#todo :
- https://marginalrevolution.com/marginalrevolution/2022/08/can-an-economy-grow-forever.html
- process inbox & tidy up the headings
- add table of contents
- https://www.sambrenner.xyz/stubborn-attachments-is-a-book-about-uncertainty-2/
# Inbox
## AI stuff
Tyler Cowen thinks that LLMs are as big a deal as the printing press.
"We're standing at what I consider to be a revolutionary moment in history. This to me is like inventing the printing press. It will take a long time for its major effects to play out, but it's a fundamental break in what we were able to do before as opposed to what we can do now."
Some short-term examples of why it's a big deal:
1. It's the old Oxford model of tutoring but everyone has access to it.
2. It can write code. Half of all coding will be done by LLMs in 2 years.
3. The way that orgs organise information will totally change. You'll just ask the LLM for things and it will tell you. So much of what goes on in companies is organising and exchanging information. All that will be redone—perhaps within 2 years for the most responsive teams.
TC often uses ChatGPT as a check: have I thought of everything? TC uses it much more than Google.
Tyler thinks of the Sydney stuff as a sign of **alignment** not misalignment. Reporters wanted it to say crazy stuff and they successfully made it do so.
GPT-3 is a reflection of us, you decide which parts of it to tap into.
TC doesn't want to slow down LLMs; he wants to accelerate them. He's more worried about the _Her_ movie scenario than about smarter models taking over by grabbing all the resources. Also: we need to stay ahead militarily.
TC: "I'm not sure what the word 'agency' means in this context."
If we really think we cannot make the next printing press a force for good, there must be something fundamentally wrong with us, we should be betting against the Western model.
---
- https://marginalrevolution.com/marginalrevolution/2022/08/can-an-economy-grow-forever.html
5. Often I am suspicious of the method of “sequential elimination” in moral reasoning. It might run as follows: “I can show you that X doesn’t matter, therefore we are left with Y as the thing that matters.” Somehow the speaker ought to take greater care to consider X and Y together, and to realize that all of the moral reasoning along the way is going to be imperfect. The “ghost traces” of X may still continue to matter a great deal! What if I argued the following?: “Pascal’s Wager arguments can be used to show that existential risk cannot be allowed to dominate our moral theories, therefore ongoing economic growth has to be the thing that matters.” That too would be fallacious, and for similar reasons, even assuming you saw Pascal’s Wager-type arguments as something to be rejected.
A better approach would be “both X and Y are on the table here, and both X and Y seem to be really important. What kinds of consiliences can we find where arguments for both X and Y work together in similar directions?” And that is where we should put our energies. More concretely, that might include finding and mobilizing talent, building better institutions, and making sure we don’t end up controlled by a dominant China.
### Talent
https://seanpatrickhughes.substack.com/p/review-of-talent
But if I really get down to what I want to ferret out, the way Cowen and Gross talk about it, it’s three things.
1. Quick systems learning
2. Limitless energy
3. Deep (obsessive) accountability
### Dialogue with Jess Flanagan
I've become more historicist as I've got older. We in the west are embedded in a society that we should not pull apart and reassemble. We're embedded in some form of common sense morality, there's a history behind us. A lot of things we can't change readily but we can make different alterations at the margin.
I don't have answers to the large-scale Parfitian or Rawlsian or Nozickian moral questions. I don't think there are absolutes, and even if there are, I don't think there are many things we can treat as absolutes in real-world decision making.
Bostrom view: I think we worry about that too much. If you look at human history it's clear we face very concrete dangers: war, environmental problems, conquest; those are how civilisations fall.
I'm not sure there is really a morality across species that are very different and cannot trade with each other. It may be that in some unpleasant way we just have to take sides. And to take the side of a vision of the world that is not just nature but is also human building... I don't think I can justify that morally, but that is the side I will take. Because the alternative is we all go extinct pretty rapidly. I mean, you can be a very conscientious vegan, but if you look closely at different parts of your life they're actually all pretty morally unacceptable: where you live, the various supply chains you interact with.
I just don't think there's a utilitarian scale where you can add up the insects on one side and the humans on the other. And so I'm on the side of the humans and the other animals we trade with.
On raising aspirations:
It is very important that people see people like them achieving big things. It enables them to think that they could too. I didn't use to think this, but now I'm a big believer in it.
- https://marginalrevolution.com/marginalrevolution/category/philosophy/page/89
Dan Levy, _Maxims for Thinking Analytically: The Wisdom of Legendary Harvard Professor Richard Zeckhauser_. How many of us will end up getting books such as this in our honor? If you are curious, Zeckhauser’s three maxims for personal life are: “There are some things you just don’t want to know,” “If you focus on people’s shortcomings, you’ll always be disappointed,” and “Practice asynchronous reciprocity.” Zeckhauser, by the way, was on my dissertation committee.
- [https://fivebooks.com/interviews/tyler-cowen-on-information](https://fivebooks.com/interviews/tyler-cowen-on-information)
- [GoodReads reviews of Stubborn Attachments](https://www.goodreads.com/review/show/2618424072?book_show_action=true)
- [Google Scholar list of academic papers](https://scholar.google.com/citations?hl=en&user=9n44NA8AAAAJ&view_op=list_works&sortby=pubdate)
- Tyler's 2009 list of his academic papers
- [https://d101vc9winf8ln.cloudfront.net/documents/28302/original/2009tylervita.pdf?1527600821](https://d101vc9winf8ln.cloudfront.net/documents/28302/original/2009tylervita.pdf?1527600821)
- "Entrepreneurship, Austrian Economics, and the Quarrel Between Philosophy and Poetry," The Review of Austrian Economics, 2003, 16, 1, 5-25.
- "What Do We Learn From the Repugnant Conclusion?" Ethics
- “The Epistemic Problem Does Not Refute Consequentialism,” Utilitas, 2006, vol. 18, 04, pp. 383-399.
- "Rejoinder to David Friedman on the Economics of Anarchy," Economics and Philosophy, 1994, 10, 329-332.
- "Self-Liberation versus Self-Constraint," Ethics, January 1991, 101, 360-373.
- "Normative Population Theory," Social Choice and Welfare, 1989, 6, 33-43.
- "Time, Bounded Utility, and the St. Petersburg Paradox," Theory and Decision, November 1988, 25, 219-223, co-authored with Jack High.
- “Resolving the Repugnant Conclusion,” in The Repugnant Conclusion: Essays on Population Ethics, edited by J. Ryberg and T. Tannsjo. Dordrecht: Kluwer Academic Publishers, 2005, 81-98.
- "How Do Economists Think About Rationality?" In Satisficing and Maximizing, 2004, Oxford University Press, edited by Michael Byron, 213-236.
- "Against the Social Discount Rate," co-authored with Derek Parfit, in Justice Across the Generations: Philosophy, Politics, and Society, sixth series, edited by Peter Laslett and James Fishkin, Yale University Press, 1992, 144-161.
- Review of John Broome's Weighing Goods: Equality, Uncertainty, and Time, Economics and Philosophy, 1992, 8, 283-285.
- Search for more TC comments on Pascal's wager
- Re-read the Callard - TC Cato Unbound thread
- [https://www.goodreads.com/en/book/show/31283667-stubborn-attachments](https://www.goodreads.com/en/book/show/31283667-stubborn-attachments) reviews
- [https://papers.ssrn.com/sol3/papers.cfm?abstract\_id=3631893](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3631893)
- I have some early books, some of them on the arts, that argue wealth is good for aesthetic values.
## Notes to process
### Tyler Cowen on Effective Altruism (St Andrews)
https://www.youtube.com/watch?app=desktop&v=ZzV7ty1DW_c
I'll start by giving two big reasons why I'm favorably inclined, but then work through a number of details where I might differ from effective altruism. So let me give you what I think are the two big pluses. They're not the only pluses, but to me they're the two reasons why, in the net ledger, it's strongly positive. The first is that, simply as a youth movement, effective altruism seems to attract more talented young people than anything else I know of right now, by a considerable margin. I've observed this by running my own project, Emergent Ventures, for talented young people, and I see time and again that the smartest and most successful people who apply and get grants turn out to have connections to the EA movement. That's very much to the credit of effective altruism, whether or not you agree with everything there. That to me is a more important fact about the movement than anything else you might say about it.
Unlike some philosophers, I do not draw a totally rigorous and clear distinction between what you might call the conceptual side of effective altruism and the more sociological side. I think they're somewhat intertwined and best thought of as such.
Three ways EA differs from classical utilitarianism:
- Emphasis on existential risk. I don't recall seeing this in Bentham or Mill.
- Emphasis on legibility. Reasons for action should be presentable, articulable, reproducible.
- Emphasis on scalability. Longtermism. Etc. Sidgwick has some interest, Parfit was obsessed.
Will has a very ambitious view of what philosophy can be: that it can ultimately guide or rule all of your normative decisions. TC's view is more modest. He sees philosophy as one useful tool. There's also personal prudence, there's managerial science, there's economics, history, consulting with friends, a whole bunch of different things. In my view, true prudential wisdom is to somehow have a way of weighting all those different inputs.
When I speak to Will or someone like Nick Bostrom, they're much more rah-rah philosophy: philosophy is going to rule all these things, they ultimately fall under its rubric, and you need philosophy to make them commensurable. That I think is something quite significant in effective altruism, and it's one of the areas where I depart from what effective altruism, at least in some of its manifestations, would recommend.
Another notion you find in both effective altruism and classical utilitarianism is a strong emphasis on impartiality.
TC: At the margin, virtually all individuals, and certainly all governments, are far, far too partial. At current margins I'm fully on board with what you might call the EA algorithm. But at the same time I don't accept it as a fully triumphant philosophic principle that can be applied quite generally across the board, or, as we economists would say, intramarginally.
I put this to Will MacAskill in my podcast with him, and I don't think he had any good answer to my question. I said to Will: let's say aliens were invading the Earth, and they were going to take us over in some way, enslave us or kill us, and turn over all of our resources to their own ends. Would you fight on our side, or would you first sit down and make a calculation as to whether the aliens would be happier using those resources than we would be? Will, I think, didn't actually have an answer to this. As an actual psychological fact, virtually all of us would fight on the side of the humans, even assuming we knew nothing about the aliens.

It seems to me there's always a big enough comparison you can make, an absurd enough philosophic thought experiment, where when you pose the question "should we do X or Y?" it is impossible to address that question without having a somewhat partial perspective.

Now it turns out that this view, that you can't be fully impartial, is going to matter for a number of our real-world decisions.
### Nick Whittaker BPR interview
https://brownpoliticalreview.org/2019/10/bpr-interviews-tyler-cowen/
**Tyler:** Well I’m not a utilitarian _per se_. I would say I’m a consequentialist but there’s a relativistic element to my consequentialism. So questions like, “How many happy plants are worth the life of one baby?” — Maybe there can never be enough. But, I suspect the question just isn’t well-defined. How many dogs should die rather than one human being? I don’t even know what the units are. So, I think the utilitarian part of consequentialism only makes sense within frameworks where there’s enough commonality to compare wellbeing.
[...]
**Nick:** The possibility of existential risks looms behind the logic of _Stubborn Attachments_. You’ve said before that you think that artificial general intelligence is either not possible or, at least, is not an existential risk. What’s everyone getting wrong about artificial general intelligence?
**Tyler:** If I go to Spotify or Netflix, they don’t even recommend stuff I want to hear or see. Right now, A.I. is a bit better than a glorified cash register, but it seems to me quite far from being a potentially destructive force. The real danger to me is evil humans operating machines including A.I., and them inflicting the harm. So the idea that the machines are going to take the initiative just seems very distant. And, if there’s that much destructive power, I’m way more worried about the humans who have a much worse track record.
## Philosophical journeys
https://marginalrevolution.com/marginalrevolution/2006/08/how_to_make_a_p.html
As a young teen I wanted to start with all of Plato’s Dialogues (yes including _Parmenides_, which I loved, but I didn’t finish _The Laws_) plus the major works of modern philosophy. I used the old John Hospers text to identify Descartes, Leibniz, Spinoza, Hobbes, Locke, Berkeley, Hume, and Kant. I read some Aristotle too, although he bored me. Then I read lots of Karl Popper and Brand Blanshard, the old-fashioned defender of rationalism and critic of positivism. I gobbled up George Smith and Antony Flew on atheism. I was influenced by Ayn Rand’s moral defense of capitalism, though I was never impressed by her as a philosopher.
Much later I read Nozick, Rawls, and Parfit. Parfit made by far the biggest impression on me. The other two, however smart, seemed predictable.
In graduate school I read Quine avidly. George Romanos’s book on Quine I found more useful than any single Quine work, although _Word and Object_ and the essay on "Two Dogmas of Empiricism" are the places to start. Quine remains a major influence, including on how I think about blog posts. Which thicket of assumptions might lead one to a possible conclusion? I took a class on philosophy of language with Hilary Putnam and developed interests in Kripke and others, but they never displaced Quine in my affections. I developed a fondness for William James. From Rorty I saw more value in the Continentals, although I prefer to misread them. I flirted with the early German romantics and their rejection of philosophy, at times mediated through J.S. Mill.
Later experience with Liberty Fund interested me in "deep" readings of Montesquieu, Tocqueville, Maimonides, and some of the other "Straussian" texts. I’ve never been a Straussian, though. I’ve made attempts to understand Heidegger but without any success.
Right now the philosophy journals I read are _Ethics_ and _Philosophy and Public Affairs_. When it comes to metaphysics, mind-body problems, and the like, I prefer books, usually of a semi-popular nature. The academic debates on these topics are too rarified to interest me very much.
[...]
Philosophy books are more like self-help tomes, or fun record albums, than they let on.
## Amia Srinivasan
See: [[=Amia Srinivasan]].
## Tyler as appreciator
Ezra September 2021 interview.
If I may complain about the complainers: you may be correct, but it makes you stupider. Just focus on building.
...
I call it cracking cultural codes
...
Books are overrated. **What can I do with my body, with respect to this question?**
## Tyler on Why I don't believe in God
https://marginalrevolution.com/marginalrevolution/2017/05/dont-believe-god.html
In general, I am opposed to the term “atheist.” It suggests a direct rejection of some specific beliefs, whereas I simply would say I do not hold those beliefs. I call myself a “non-believer,” to reference a kind of hovering, and uncertainty about what actually is being debated. Increasingly I see atheism as another form of religion.
[...]
4. I am struck by the frequency with which people believe in the dominant religions of their society or the religion of their family upbringing, perhaps with some modification. (If you meet a Wiccan, don’t you jump to the conclusion that they are strange? Or how about a person who believes in an older religion that doesn’t have any modern cult presence at all? How many such people are there?)
This narrows my confidence in the judgment of those who believe, since I see them as social conformists to a considerable extent. Again, I am not sure this helps “atheism” either (contemporary atheists also slot into some pretty standard categories, and are not generally “free thinkers”), but it is yet another net nudge away from “I believe” and toward “I do not believe.” I’m just not that swayed by a phenomenon based on social conformity so strongly.
That all said I do accept that religion has net practical benefits for both individuals and societies, albeit with some variance. That is partly where the pressures for social conformity _come from_. I am a strong Straussian when it comes to religion, and overall wish to stick up for the presence of religion in social debate, thus some of my affinities with say Ross Douthat and David Brooks on many issues.
5. I am frustrated by the lack of Bayesianism in most of the religious belief I observe. I’ve never met a believer who asserted: “I’m really not sure here. But I think Lutheranism is true with p = .018, and the next strongest contender comes in only at .014, so call me Lutheran.” The religious people I’ve known rebel against that manner of framing, even though during times of conversion they may act on such a basis.
I don’t expect all or even most religious believers to present their views this way, but hardly any of them do. That in turn inclines me to think they are using belief for psychological, self-support, and social functions. Nothing wrong with that, says the strong Straussian! But again, it won’t get _me_ to belief.
6. I do take the William James arguments about personal experience of God seriously, and I recommend his [The Varieties of Religious Experience: A Study in Human Nature](https://www.amazon.com/Varieties-Religious-Experience-Study-Nature/dp/1439297274/ref=sr_1_1?ie=UTF8&qid=1495491465&sr=8-1&keywords=william+james+varieties/marginalrevol-20) to everybody — it’s one of the best books period.
## On radical uncertainty
https://marginalrevolution.com/marginalrevolution/2006/11/should_we_disco.html
I am closer to a Bayesian myself. But even if we take the Knightian view at face value, it does not diminish the importance of the future. **Whether or not we call expected value calculations "scientific" or "stupid," we still need to make choices about the future.** A woman might think "I simply can’t imagine what sort of man I might marry." He might even be some hitherto unimagined extraterrestrial being. But her parents should still set aside some money for the possible ceremony.
## On reasons to be dogmatic
https://marginalrevolution.com/marginalrevolution/2008/08/the-five-best-r.html
In strict Bayesian terms, most innovators are not justified in thinking that their new ideas are in fact correct. Most new ideas are wrong, and the creator’s "gut feeling" that he is "onto something" is sometimes as epistemologically dubious as is the opinion of the previous scientific consensus. Yet we still want them to promote these new ideas, even if most of them turn out to be wrong.
In this view, the so-called "reasonable" people are selfishly building up their personal reputations at the expense of scientific progress. They are too reasonable to generate new ideas.
To put it another way, there are two kinds of truth-seeking behavior:
1. Hold and promote the view which leads to society most likely settling upon truth in the future, or
2. Hold and promote the view which is most likely to be correct.
These two strategies coincide less than many people think.
## Tyler on "why don't we build beautiful neighborhoods anymore?"
#tweet
**COWEN:** Now, if I walk around Paris, London, New York, Dublin — almost any city you care to name — I see beautiful, older urban neighborhoods, not everywhere, but really quite a few of them if the city has not been bombed into oblivion. But I don’t see newly created, wonderful, beautiful urban neighborhoods _really_ anywhere. Why is that?
**GLAESER:** I’m not sure that I agree with you. Certainly, the older cities of this country have regulated themselves into stasis. We don’t get beautiful new neighborhoods because historic preservation is a completely binding rule in London, in Paris, and in New York. It’s just very difficult to change the neighborhood in a way that works.
In terms of newer neighborhoods, more generally, part of the issue is that we always build our cities around the transportation technology that is dominant in the era in which it’s being created. For the past 80 years, the dominant technology has been the car, which means there are lots of nice places for you to drive around in America, but there aren’t that many new pedestrian neighborhoods for you to walk around.
**COWEN:** But there have been plenty of buildings, say, outside of the central core of Paris, sometimes even in the Paris region, but all sorts of other parts of France, and it’s never beautiful. The French have an incredible culture, an amazing eye, highly sophisticated, maybe willing to sacrifice cheapness for beauty. Yet, they seem incapable of building new, beautiful neighborhoods, whether they’re walkable. We have old people’s homes which are completely walkable. They’re never beautiful. Nothing’s beautiful.
**GLAESER:** My father was an architectural historian, Tyler. He was a curator of the Museum of Modern Art. So, I have a certain feeling that we as economists are not necessarily the best judges of the beautiful. What’s your view of the [Centre Pompidou](https://www.dezeen.com/2019/11/05/centre-pompidou-piano-rogers-high-tech-architecture/)? Do you think beautiful or not beautiful? What’s your view on Renzo Piano?
**COWEN:** I think there are many excellent individual buildings. [Bilbao Guggenheim](https://www.architectural-review.com/buildings/guggenheim-museum-in-bilbao-spain-by-frank-o-gehry-associates). There’s a long list, but neighborhoods is where we’re falling short, not individual buildings. Renzo Piano — amazing. [[Peter] Zumthor](https://www.archdaily.com/364856/happy-70th-birthday-peter-zumthor) — go all the way down the list. But neighborhoods — I don’t see them.
**GLAESER:** How about Barcelona, the parts of Barcelona that were built after the Olympics? That’s a perfectly walkable neighborhood and not a terribly unpleasant place.
**COWEN:** “Great to live in, not terribly unpleasant” is a wonderful description of it. That’s the best we can do? We’re so much wealthier. It’s crazy. I don’t get it.
**GLAESER:** It’s an interesting question. What about areas in Asia? All of Seoul is new. There are no blocks in Seoul that you find inspiring?
**COWEN:** I like them. I enjoy Seoul, but the really beautiful parts of Seoul seem to be the older remnants, which are not many at this point.
## Tyler on Robin on Tyler on Robin
[Tyler on Robin on Tyler on Robin](https://marginalrevolution.com/marginalrevolution/2007/07/assorted-link-2.html)
In some ways I think of [the whole book](http://truckandbarter.com/mt/archives/2007/07/discovering_my.html) as an (attempted) rebuttal to Robin. Robin is the rational constructivist, the logical atomist, the reductionist, and the extreme Darwinian. The Inner Economist is trying to reconcile (modified) economic reasoning and a (modified) version of common sense morality.
But…for the secularist reductionism beckons and seduces. Imagine an intellectual war with Darwin, Fourier, Comte, early Carnap, David Friedman and millenarian Christian eschatology on one side (that’s my mental image of how Robin maps into the history of ideas), with bits from Henry Sidgwick, Hayek, Quine, and William James on the other side, yet within the framework of modern microeconomics and with ongoing references to the blogosphere. I am (implicitly) defending gradualism, pluralism, the partial irreducibility of individual choice, the primacy of civilization, and yes also [a certain degree of social artifice](http://www.overcomingbias.com/2007/07/only-losers-ove.html#more).
Note that Robin is wrong to suggest I don’t reply to his views. **I paint him as engaged in a subjective quest — including on bias — rather than standing from an Archimedean point.** And within the realm of subjective quests, I try to outline a superior one, especially in the last few chapters of the book. He doesn’t like being relativized in this fashion, and that he doesn’t see me as replying to him is itself an indicator of our underlying differences.
Still, I know I have to be afraid of Robin! Most people who don’t find Robin’s ideas compelling are simply unwilling to face up to the holes in what they believe.
Wake up, and take at least a sip from the Robin Hanson Kool-Aid. Life will never be the same again.
## What we learned from Fast Grants
https://future.a16z.com/what-we-learned-doing-fast-grants/
We ran a survey of Fast Grants recipients, and asked some broader questions about their views on science funding.
57% of respondents told us that they spend more than one quarter of their time on grant applications. This seems crazy. We spend enormous effort training scientists who are then forced to spend a significant fraction of their time seeking alms instead of focusing on the research they've been hired to pursue.
...
In our survey of the scientists who received Fast Grants, 78% said that they would change their research program “a lot” if their existing funding could be spent in an unconstrained fashion.
...
We all want more high-impact discoveries. 81% of those who responded said their research programs would become more ambitious if they had such flexible funding. 62% said that they would pursue work outside of their standard field (which the NIH explicitly discourages), and 44% said that they would pursue more hypotheses that others see as unlikely (which, as a result of its consensus-oriented ranking mechanisms, the NIH also selects against).

Many people complain that modern science is too frequently focused on incremental discoveries. To us, this survey makes clear that such conservatism is not the preference of the scientists themselves. Instead, we've inadvertently built a system that clips the wings of the world's smartest researchers, and this is a long-term mistake.
See also [[=Patrick Collison]].
## Daniel Frank summary
https://marginalrevolution.com/marginalrevolution/2021/05/daniel-frank-on-me-his-introduction-to-tyler-cowen.html
https://danfrank.ca/my-favourite-tyler-cowen-posts-and-ideas/
## George Mason GMU podcast interview
Meta: Tyler is really trying to inspire and be an exemplar in this interview, to nurture the [[Scenius]] at George Mason and beyond.
Started Fast Grants after a few hours chatting with Patrick C. The next day software was being written, a team of referees was assembled, and the Emergent Ventures finance team was repurposed. We put the word out on Twitter, and in less than a week we had 5000 mostly serious applications and were sending money out.
Just try it and iterate; get quick feedback; let reality clobber you in the head.
Make the world tell you no. It's not that everyone is gonna succeed with every idea. Take in feedback, respond to criticism, have a great set of peers and mentors. At the end of the day you need to try with your dream or else it is never going to happen, because if you're not behind it 100%, no one else will be either. So keep a can-do, positive optimism even in troubled times; it's very important to keep that attitude. There's plenty wrong in the world, and we might even disagree about what's wrong, but don't complain too much; move forward with your idea. People might disagree with your idea, but they will respect you. But if you self-sabotage ("here's seven reasons why I can't do it"), I don't know, I think people complain too much, if I may be allowed to complain about complaining. Believe it or not, even some people in academia complain.
Quake books growing up:
- Hayek
- Friedman
- some of Ayn Rand (don't like her as a philosopher, but like her belief in capitalism)
Two key insights of economics:
- There are always tradeoffs
- Incentives matter
Thinking through how those truths affect our world is, to me, the main thing economics does. These are the key things we should teach our students.
Meta: I notice how well Tyler has pushed (1) into my mind.
Talent hiring is not fixed rules; I think it is closer to music or art appreciation. You can't boil it down to fixed rules, but if you spend a lot of time studying you can become much better at it.
For most jobs we overrate intelligence and underrate values: durability of commitment, dedication to the mission and to yourself, ability to keep going in the face of adversity. Don't obsess over the person being just like you; that's the most common mistake almost all interviewers make. People appreciate people who are just like themselves. You should always be trying to jolt yourself out of that habit by looking for someone different from yourself.
People who can practice and train, in the way that an athlete or a concert pianist would, for any job, I look for them. They wake up every morning and they ask themselves what can I do today to get better? A lot of days they might fail at that endeavour, that's fine. But if they ask that question every day I am very interested in them.
I live by that philosophy: whatever you do, just try to do it every day and see what happens. That is the best way to become smarter. If you just do that you can get so far. The other people who live by this philosophy, they're a minority, but they are out there in every field, and they will recognise it in you like a light shining. And they will bond with you. The best way to network is to try to be worth networking with.
He has blogged every day for 18 years he is 59 now, so he started around age 41.
To fight climate change, have more children.
Tyler doesn't trade actively, doesn't own crypto. Partly because wife works for SEC, but mainly because doesn't want to get distracted.
## https://www.persuasion.community/p/-why-governments-fail
I agree with Martin Gurri that, because of the internet, there's been a collapse of faith in many layers of authority. There's a general decline of faith in religious authorities, and political authorities and public health experts. You can go on down the list. And then, that takes many particular manifestations. One of them is populism which, as you describe it, I don't like. But I don't think the populists are the enemy. I view them as one symptom of a broader transformation. And where the world needs to head is to establish new means of producing credibility and good reputation that are robust to current technologies. What we’re now calling “populism” might turn out to be the least of our problems.
The work of Piketty, I think, has been refuted. The rise in wealth inequality seems to have come through an increase in the value of land, driven in large part by NIMBYism, not from some kind of superior return to capital. So in my view, that's just wrong.
Mounk: Say a little bit more about that. So NIMBYism means “not in my back yard” -ism. And obviously, the huge increase in property values in many Western countries comes from the great difficulty of building new housing. But why does that refute Piketty’s theory?
Cowen: Well, land is worth much more in London, in San Francisco. If you take out that increase in the value of land, which of course accrues only to landowners, the increase in wealth inequality basically goes away. And Matt Rognlie showed that numerically, and there's never been an effective response. So, I say Piketty is wrong.
Mounk: And what's the wider implication of that? Part of the idea of Piketty’s work was to say that the natural tendency of capitalism is towards greater and greater inequality and to need either these sort of destructive wars, which reduced inequality in the 20th century, or very radical political action in order to stop the rich from getting richer and the poor from stagnating or getting poorer. And what you're saying is: “No, actually, what you need to fix is NIMBYism, and the sort of very artificial increase in the price of land, and that would be enough to make sure that economic gains are more fairly distributed.” Is that roughly the argument?
Cowen: I don't want to use the word “fairly.” I'm not sure what's fair. I'm just saying the observed increase in wealth inequality in these nations goes away when you abstract from land. So capital is not the problem. Let's deregulate building. I'm not saying the result of all that is necessarily fair, that's a tougher question. So, if Apple as a company earns much, much more money by selling iPhones around the world, income inequality is going to go up by a lot. In the globalized system, that’s inevitable. I think that's a good thing, not a bad thing. So inequality is not the problem, poverty is the problem.
## Tyler panel
Very important: innovation in making it easier to raise children. One of the great joys and the great burdens of life. Wealthy countries are depopulating.
# Tyler on building bridges
Maybe I'm really a philosopher who writes about the economy.
TC likes going to second or third tier cities in Europe. Very unspoilt.
I was very opposed to Brexit. I'm not anti-Brexit anymore, even though I still would not vote for it.
It's hard for Western Europeans to grow up properly disagreeable, partly because the life there is so nice.
For Americans: realise that you've grown up in the most provincial country in the world; you don't have a clue what other places are like.
## Tyler on Lex Fridman
Repeats the 700-800 years claim, says I've argued for it. \[Where?\] #todo
TC: If it only costs $1m to destroy a city, how long until someone does it?
I don't mean a catastrophe where everyone dies. Just massive setback to civilisation. It's very unclear what happens after that.
\[Has he read Louisa's recovery stuff? Would that update him?\]
English as language of the world makes going to UK or USA even better compared to going to France.
America has much much less social security whether from govt or community than EU.
Ideally you want part of the world to be very innovative and another part to be more risk averse, which gives people these smooth lives and six weeks off and free-rides on the innovation.
Everyone frames it as the American way vs the European way, but basically they are complements.
Is competition good? What really matters is how good your legal framework is. In the animal world competition leads to bloodshed; it's quite unpleasant, to say the least. If you have something like rule of law and clearly defined property rights which are, within reason, justly allocated, competition probably is going to work very well. But it's not an unalloyed good: military competition can be very destructive, though also sometimes good.
There's a lot of anarchy. We should squish our anarchy into the right corners. Don't dump on the anarchists; listen carefully and learn what's right within that point of view.
I read Plato before I read Ayn Rand and in the dialogues you see that the wisdom comes from the coming together of many perspectives, and Ayn Rand didn't have that.
Boswell's Life of Johnson biography, one of the greatest philosophy books ever.
JSM better philosopher than is realised, coauthor with Harriet.
Shakespeare, maybe the wisest thinker of them all.
I think we are beings of high neuroticism. Not everyone, but most people. If someone says 10 nice things and 1 nasty thing about us, we're much more bothered by the latter, especially if it is somewhat true.
Gurri thesis: the more you see of things the more you find to complain about. Internet means more transparency means trouble.
TC: a lot of the great creators did not have huge cushions. Whether it's Mozart or James Brown, Van Gogh. If you look at heirs to great fortunes, maybe I'm forgetting someone, but it's hard to think of any that were important. \[Wittgenstein, Schopenhauer\].
No-one knows what money is. Bitcoin has taken over a lot of the space that used to be held by gold, that looks sustainable. I'm not short bitcoin.
The Biden people are going to regulate crypto, and they're going to do it soon.
Use of knowledge in society
Plato dialogues
Most good advice is context specific, but here are my two generic pieces of advice:
- First, get a mentor for the things you want to learn.
  - How to find one: be interesting, be direct, and try. It's amazing how many people don't even try.
- Second, build small groups of peers with broadly similar interests. People you hang out with, ideally in person. Every day they're talking about the things you care about; that's your small group, and you really like them and they like you and you have this common interest.
To learn about love go to Keats.
Bruce Springsteen, Born to Run.
"When they write the history of the universe, life on earth will be a sentence."
So we have to care about the scale we can care about. It's fine for us to care at our own scale.
### Tyler on Malthus
[https://www.youtube.com/watch?v=mcsJ1rXjt1Y&feature=emb\_title](https://www.youtube.com/watch?v=mcsJ1rXjt1Y&feature=emb_title)
Malthus openly obsessed with sex and food.
Malthus influenced Darwin, idea of environmental constraints leading to population fluctuations and influencing people.
### Some other interview
My overall view is that ethical intuitionism settles many fewer issues than most of its proponents like to think. That said, there is often nowhere else to go. We somehow need to come to terms with two propositions at the same time:
1\. We need to think more rather than less ethically.
2\. The content of ethical philosophy tells us less, in reliable terms, than most people would like to believe.
One other thing I'd just like to mention about the book is how much time I spend discussing agnosticism, and that there needs to be room for a radical agnosticism in any approach to politics or economics. But at the same time, in the big picture you can still believe in things strongly. So, you can be truly unsure what the best policy is, but you have some assessment. And when you believe in this power of compounding sustainable economic growth, the force of that far out is enough that you can be agnostic about a lot of your concrete judgments but still believe very passionately in doing a lot now to reach these ends. And, balancing the skeptical perspective of Hume and Hayek with rationalism is another underlying set of themes running throughout the book.
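The "power of compounding sustainable economic growth" claim can be made concrete with a quick calculation. This is my own illustrative sketch, not from the interview, and the growth rates are hypothetical: even a one-percentage-point difference in sustained growth swamps most other considerations over long horizons.

```python
# Illustrative sketch (hypothetical rates): how small differences in
# sustained annual growth compound over long horizons.

def compound(rate: float, years: int) -> float:
    """Total growth factor after `years` of constant annual growth."""
    return (1 + rate) ** years

for years in (50, 100, 200):
    low = compound(0.01, years)   # 1% annual growth
    high = compound(0.02, years)  # 2% annual growth
    print(f"{years} years: 1% -> x{low:.1f}, 2% -> x{high:.1f}, "
          f"ratio {high / low:.1f}")
```

After 200 years, the 2% economy is roughly seven times richer than the 1% economy, which is the arithmetic behind Cowen's point that the growth rate can dominate agnosticism about concrete near-term judgments.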
### Tyler Cowen Joseph Walker Swagmen
Act vs rule utilitarianism
The different utilitarian proclamations are all asking you to maximise subject to a constraint. The best way to think about act vs rule is that they specify different constraints, not that they give different instructions. Societies that can get themselves into a rule-consequentialist frame of mind on average tend to do much better, so let's not discriminate against that; let's lean into it as best we can.
What to do about uncertainty?
…
Uncertainty should not paralyse you: try to do your best, pursue maximum expected value, just avoid the moral nervousness, be a little Straussian about it. Like: here's a rule, on average it's a good rule, we're all gonna follow it. Bravo, move on to the next thing. Be a builder.
So… Get on with it?
Yes. Ultimately the nervous Nellies are not philosophically sophisticated; they're over-indulging their own neuroticism, when you get right down to it. So it's not like there's some brute "let's be a builder" view and then some deeper wisdom that the real philosophers pursue. You be a builder or a nervous Nellie, take your pick; I say be a builder.
Benjamin Friedman argues that successful democracies require a surplus that you use to get most people to agree and obtain consensus moving forward.
Pentagon UFO videos...
Harry Reid thinks there's something going on, evidence that hasn't come out. That gets me up to at least 1%.
Repeats the claim that we'll be lucky to last another 1000 years due to nuclear weapons.
Podcasts with a live audience: the audience gives feedback, but I'm not sure it's good feedback. The audience loves humour and entertainment, and it's great in the moment. But does it make for a better podcast? Maybe we have too much feedback. I don't check how many people download each episode; that seems to me like too much feedback.
What has MR and CWT taught you about how to build an audience?
Uh not to worry about building an audience, do something you care about and really don't check the meter or whatever you've got.
Thiel: most people just don't realise it. They think he's just this guy who says a few provocative things… they don't get how smart Peter really is.
I fold pages but I don't take notes, it seems like just another thing to handle, inefficient.
I think America's super educated high conscientiousness people in a sense don't need religion or you could say they have their own religion however you want to put it, but most of the country isn't that. It's as if the elites have foisted secularism on everyone for their own benefit, like so they can have more sex, drink when they want, take whatever drugs they want to experiment with, and it's bad for much of the country.
## [https://marginalrevolution.com/marginalrevolution/2014/04/nick-becksteads-conversation-with-tyler-cowen.html](https://marginalrevolution.com/marginalrevolution/2014/04/nick-becksteads-conversation-with-tyler-cowen.html)
Question: What are your thoughts on the effective altruism movement in general—how familiar are you with it? If you are, are there things you wish we were doing that we aren’t doing or things you wish we were doing differently?
Tyler likes it, supports GiveWell on his blog, and donates to GiveDirectly. But it’s small potatoes in comparison with, say, innovation.
Tyler’s intuition is that improving marketing is a key issue for effective altruism, rather than fine-tuning where people should be giving.
Tyler thinks about the future and philosophical issues from a historicist perspective. When considering the future of humanity, this makes him focus on war, conquest, plagues, and the environment, rather than future technology. He acquired this perspective by reading a lot of history and spending a lot of time around people in poor countries, including in rural areas. Spending time with people in poor countries shaped Tyler’s views a lot. It made him see rational choice ethics as more contingent. People in rural areas care most about things like fights with local villages over watermelon patches. And that’s how we are, but we’re living in a fog about it.
Rational choice ethics and the "Straussian truths of the great books": the truths of literature and what you might call the "Straussian truths of the great books" (what you get from Homer or Plato) are at least as important as rational choice ethics. But the people who do rational choice ethics don't think that. If the two perspectives aren't integrated, it leads to absurdities: problems like fanaticism, the Repugnant Conclusion, and so on. Right now though, rational choice ethics is the best we have; the problems of, e.g., Kantian ethics seem much, much worse.
If rational choice ethics were integrated with the “Straussian truths of the great books,” would it lead to different decisions? Maybe not—maybe it would lead to the same decisions with a different attitude. We might come to see rational choice ethics as an imperfect construct, a flawed bubble of meaning that we created for ourselves, and shouldn’t expect to keep working in unusual circumstances.
https://marginalrevolution.com/marginalrevolution/2006/11/robin\_hanson\_is.html
I do not go as far as Robin in my desire to preach truth-seeking. With all due respect to the truth, I find something Quixotic in such a quest. I view Robin as believing in a kind of Archimedean point, from which we could be objective truth-seekers if only we had the will. My view is closer to that of Pascal. Yes we should seek self-improvement, but we are weak and in the dark no matter what. An excessive attachment to "truth-seeking," might even divert us from the pragmatic, skeptical pluralism — laden with a healthy dose of ego to get the work done — most likely to lead society closer to truth.
\---
### Transhumanism
That being said, the economist in me asks not "whether" but rather "at what margin"? Is there any margin at which concerns of identity should cause us to reject otherwise beneficial transhumanist improvements?
Most people want their children to look like themselves, and to some extent to think like themselves. We invest many thousands of dollars and many months of our time to acculturate our children. Now let’s say your children could be one percent happier throughout their lives, but this would mean they were totally unlike you, the parent. In fact your children would be turned into highly intelligent velociraptors and flown to another planet to live among their own kind. How many of us would choose this option? I can think of a few responses:
1\. Transhumanism will bring improvements of more than one percent; we should forget about identity and let everyone become healthier and happier. What’s wrong with uploads?
2\. Governments should not restrict transhumanist innovation. Let people and their children choose their degrees of identity continuity for themselves. (Isn’t there a collective action problem here? Everyone wants a more competitive kid but at the end humanity is very different.)
3\. The parental analogy is not relevant for policy choices. Parents should be partial across identities, but governments should be more neutral. And surely uploads will still be allowed to vote, no?
4\. Identity attachments are, very often, petty and small-minded to a considerable degree. We should be cosmopolitan across chimpanzees and intelligent velociraptors, not to mention enhanced humans.
---
**Nick:** Is death bad, in the sense that [Eliezer Yudkowksy](http://yudkowsky.net/singularity/simplified/) or [Peter Thiel](https://www.inc.com/jeff-bercovici/peter-thiel-live-forever.html) think that it is?
**Tyler:** I don’t think death is as bad as they think death is. I think there’s something about accepting one’s limits that is a positive side of death. I think Silicon Valley should appreciate that better than it does and I think the chance that even our grandchildren will live forever, or even to age 300, is very, very, very small.
# By theme
## Longtermism and existential risk
- SA depends on time horizon: not too long, not too short. If very long, you stop caring about growth and just become very risk averse; you only care about safety.
- In the Stanford Talk, I estimated "in semi joking but also semi serious fashion, that we had 700 or 800 years left in us".
- "I am not a space optimist, I think the speed of light, the difficulties of travel are really binding constraints, and maybe there will be vacations on the moon or something, but basically what we have to work with is earth."
- if you are a space optimist you may think that we can relax more about safety once we begin spreading to the stars. "you can get rid of that obsession with safety and replace it with an obsession with settling galaxies. but that also has a weirdness that I want to avoid, because that also means that something about the world we live in does not matter very much, you get trapped in this other kind of Pascal's wager, where it is just all about space and NASA and like fuck everyone else, right? And like if that is right it is right. but my intuition is that pascal's wager type arguments, they both don't apply and shouldn't apply here, that we need to use something that works for humans here on earth."
- why do you think we only have 800 years? "Uh, weapons of mass destruction. \[...\] If you let the clock tick out long enough, I don't think you have to believe that literally every human being will die, but just that civilisation will cease to exist."
Tyler Cowen: Philosophers and economists should be shouting about it much more. I think some of the problem is a political one. I find it relatively easy to convince a lot of philosophers the moral rate of time discount should be zero, but relatively hard to get them to accept the practical implications of that, namely, that ongoing economic growth is a very, very positive thing — say, more important than redistributing income.
It would be more of a problem for the argument if you thought growth and stability were always at loggerheads. But there are large numbers of societies in world history that collapse because they don't grow enough: they can't fend off, say, drought or weather problems or problems in their agriculture, or they're conquered by someone else. If the United States stopped growing, I feel a lot of free countries in the world would collapse or be taken over, or they would become unfree. If we grow at a very low rate, our budget will explode. It will cut back on our discretionary spending, our ability to advance science, to protect the world against an asteroid coming. So, yes, I absolutely think it applies today.
Robert Wiblin: I think I agree that if the US stops growing that would be very bad, principally because of the cultural and political effects that that would have and perhaps that we’ve started to see over the last five years. But doesn’t that suggest that we want a sufficiently high level of growth? One that keeps people happy and looking forward to the future and being willing to accept some negative shocks because they know that things are going to get better in the future anyway? And that we don’t necessarily have to go from 4 percent GDP growth to 8 percent GDP growth — that’s not necessarily going to make things more stable.
Tyler Cowen: You’re talking about going from 4 to 8 percent. You may or may not think that’s stabilizing, but the actual reality is, we’re in the midst of one of our most wonderful labor market recoveries, there’s been a big fiscal stimulus. And year on year, we’re doing 2.7 percent, which is very poor compared to our past performance. You see a lot of recoveries where we grow at 4 percent or more just to get back to where we were. The growth engine has slowed down. There’s a lot of evidence — some of which I present in my other books — that technological progress has slowed down.
It doesn’t seem to me we’re close to the margin of growth being so fast that we’re thrown off the track. We have high level of debt in deficits, and we don’t know how to pay it off. And we’re cutting into our future capabilities with infrastructure and military defense, many areas, science.
If a single person wanted to maximize sustainability of human civilization, would you recommend that they focus on economic growth? Or do you think that there’s more leveraged opportunities if they want to set aside making money?
## Future of humanity: space, technological maturity
Rob: Do you think at some point it’s just going to level off because we’ll have done everything we can? We’ll have grabbed all of the matter we can access, and we’ll have figured out the best configuration for it to produce value. And at that point, it’s just a matter of milking it for as long as we can.
Tyler Cowen: No, I think the world will end before that happens.
Robert Wiblin: Why do you think that we won’t leave the galaxy? And also, even if you think that that’s improbable, just given the fact that almost all of the potential value that we can generate is outside of this galaxy because that’s where most of the matter energy is. Shouldn’t we be pretty focused on that possible scenario where, in fact, we do leave the galaxy?
Tyler Cowen: I see the recurrence of war in human history so frequently, and I’m not completely convinced by Steven Pinker. I agree with Steven Pinker, that the chance of a very violent war indeed has gone down and is going down, maybe every year, but the tail risk is still there. And if you let the clock tick out for a long enough period of time, at some point it will happen.
Powerful abilities to manipulate energy also mean powerful weapons, eventually powerful weapons in decentralized hands. I don’t think we know how stable that process is, but again, let the clock tick out, and you should be very worried.
If you put me in the legislature, I’ll vote to increase funding for space exploration. But relative — especially in the Bay Area — relative to other people I speak to in this kind of fringe group of intellectuals who think about space, I’m more pessimistic than just about all of them.
But it’s also that I’m more optimistic about the earth. The ocean of course is enormous — it could be platforms, it could be underwater. Deserts, places that can be terraformed, cities in the sky — you do want diversification, protection against a big nuclear war. Maybe for that you need other planets. There’s the moon, there’s Mars — they’re actually big enough to have diversification.
## Risk vs growth rate
RW: Do you think watching that in the ’30s and ’40s, we should have been glad that the Soviet Union had a fast rate of economic growth? Or should it have, on balance, concerned us? Both because it would potentially lead to more conflict between countries because you have more great powers, and also, because the person who was leading the Soviet Union was not a very nice guy.
Tyler Cowen: Of course, it should have concerned us, but on net, it was obviously a huge plus because the Soviets stopped the Nazis. But keep in mind also, my wife and daughter were born in the Soviet Union and grew up in a wealthier society. My father-in-law, who still lives with us — he was alive during the time of Stalin, and his life was better. He’s still alive today because Soviets had a higher rate of economic growth.
Soviets urbanized, probably more rapidly than China has done lately — that’s not a well-known fact. The world discovered a lot of talent through that urbanization and people being brought into formal education.
So it had a lot of benefits since Stalin didn’t wipe us out and he beat the Nazis. If you’re looking for any case where a higher rate of growth had a big pay off, I think it’s that one. That’s not the counterintuitive case.
Robert Wiblin: I feel like, ex post, it definitely looks good, but at the point where the Soviet Union got nuclear weapons, I might have said, looking back, I wish that it had not become wealthy that quickly. Because now we have a nuclear standoff, and in 1948 or 1949, you don’t know how stabilized that situation was going to be. Looking forward you might think there’s really a very substantial probability of humanity destroying itself during the Cold War.
Now looking back we can say, “Well it wasn’t so severe.” But you might have thought, actually it would be better if there was just one country — given that we have nuclear weapons, what we really want is just that one country that’s going to a hegemon and dominate the world so that it won’t be a nuclear war, and we can kind of have permanent stability. What do you think of that?
Tyler Cowen: I don’t think we understand stability and nuclear weapons very well. Do keep in mind the two times they’ve actually been used is when only one country had them. It doesn’t mean we have a fully general theory there.
Nuclear weapons have spread, actually, at a slower rate than many people have expected. You read geopolitical theorists after the end of World War II — a lot of them think there’s going to be another nuclear war really soon, and we tend to dismiss them like, “Oh, those silly people, you know, they were just paranoid.” But maybe they were right, and they got lucky, and that’s the true equilibrium. I don’t think we should reject that view.
Tyler Cowen: Let’s imagine it were the case that somehow we actually knew that, if we could construct hobbit society, but with people being taller, say, the world would not end. And if we don’t construct hobbit society, the world will end, say, through nuclear weapons. Let’s say we knew that or we thought 70 percent chance that’s likely to be true.
I still don’t think we actually are good at implementing the means to bring about hobbit society. We would have to become brutal totalitarians. If anything we might accelerate the risk of this nuclear war.
So when you think of the feasible tools at our disposal, that’s kind of outside our current feasible set, hobbit society. We’re on this path, I think we have to manage it. We can’t just slam the brakes on the car — it’ll careen off the cliff.
Our best chance is to master and improve technologies to make nuclear weapons, warning systems, second-strike capability, safer rather than riskier. I just think that’s the path we’re on, and the hobbits are not there for us.
Robert Wiblin: Perhaps if I had to summarize my overall world view in just one quote, it would be this quote from E. O. Wilson: “The real problem with humanity is the following: We have paleolithic emotions, medieval institutions, and god-like technology. And it is terrifically dangerous.”
This highlights my concern with the idea that we ought to increase economic growth which seems to push more on the god-like technology than on improving the paleolithic emotions or the medieval institutions. By focusing on improving technology, we’re increasing the disconnect between the improvement that we’ve had in our engineering and scientific and technological ability and the fact that our personal and moral values and our institutions for governing ourselves have not kept up with that.
So I’d be perhaps more interested in seeing people focus on the emotions and institutions here to get them to catch up with our god-like technology than increasing the technology itself. What do you think of that?
Tyler Cowen: I’m more optimistic than Wilson and perhaps you. He refers to medieval institutions, but in most countries institutions are much better than that. What are the good medieval institutions that stuck around? Like parliament of Iceland? Oxford, where you’ve been? I suppose Cambridge? Maybe a few other schools, but we’ve built so much since then. I don’t mean technology. I mean quality institutions with feedback and accountability.
If you look, say, at how Singapore is run, a lot of the Nordic countries, some parts of American life — by no means all, just to be clear — Canada, Australia, where you're from: you see remarkable institutions, unprecedented in human history. I don't take those for granted; they're not automatic. But I think one has to revise the Wilson quote and be more optimistic.
Robert Wiblin: Yeah, so medieval institutions is perhaps an exaggeration. But do you think . . .
Tyler Cowen: But it’s a significant exaggeration, right?
Robert Wiblin: Okay, I think it’s the case that probably political institutions and our decision-making capacity isn’t improving as quickly as our technological capabilities. And I wish it were the other way around that our wisdom and prudence and ability to make decisions that are not risky was maybe moving faster than our technology.
Tyler Cowen: But see, I see it the other way around. If you look at data on economic growth, you see huge productivity improvements: China, India, basically free-riding on existing technologies, not usually making them better. It’s just managing companies better, having better incentives in companies.
If the world economy grew 4-point-whatever percent last year, way more of that, say, 4.8 percent, is coming from better management, better institutions than is coming from new technology. Maybe 1 percent of it is coming from new technology and the rest from better management — in some cases, growing population, capital resources.
So institutions are way out-racing technology right now. Again, I’m not taking that for granted, but I think people would be much more optimistic if they viewed it in that light.
## Terrorism / the apocalyptic residue
But terrorism in general, I don’t think we understand well, so after 9/11, people thought there would be many more attacks. You could ask questions, “Why don’t they just send a few people over the Mexican border? They get here, they buy submachine guns, they show up in a famous shopping mall and they take out 17 people. They don’t get any further than that, but it’s a massive publicity event. And this just happens every two or three weeks.”
A priori, it almost sounds plausible, but nothing like that has happened. If anyone has done that, it’s our native, white Americans who are not, in the traditional sense, terrorists. It’s clearly possible, but they don’t do it. So when you ask, how likely is someone to do something pretty horrible with a pretty cheap decentralized, highly destructive technology? We don’t even see them acting at the current frontier of destructiveness.
What you need in terms of people who are competent enough, motivated enough, coherent enough, have a base to operate from. How hard is it in a combinatorial sense for all those to come together? We don’t know, but I think thinking about it more, you become a little more optimistic rather than less.
Robert Wiblin: I think that’s fair, but it seems like over time, as our technology gets better, the number of people and the amount of expertise and the amount of security that you would need in order to pull off an operation like that is going down and down and down. Eventually, it could end up being a handful of people or even a single individual, and perhaps breakthroughs in biology are the most likely cause of that.
\---
I once asked some of my friends an interesting question: If a single person, by a sheer act of will that they had to sustain for only five minutes, could destroy a city of their choice, how much time would have to pass before one individual on Earth would take the action to destroy that city? Is it like it would occur in two seconds, it would occur in 10 minutes, it would occur within a year? I don’t think we know, but no one should be optimistic about that scenario.
## Global governance, singleton, surveillance
Perhaps, in order to solve that problem, we should be willing to have a world government. Kind of run towards a singleton, as Nick Bostrom calls it, which would be like having one decision-making process that is able to control everyone else, prevent conflicts.
Even if it doesn’t produce the optimal decisions, at least we won’t have extinction. We’ll be able to survive for a lot longer and generate some more value even if the singleton doesn’t make the absolute best decisions that we might think of from a liberal point of view. What do you think of that argument?
Tyler Cowen: It’s hard enough to get the European Union to stay together, and those countries have so much commonality of interest. I expect some further nations, after Brexit, to peel off over time. Try to get Southeast Asia to agree even to a local ASEAN being much stronger, being an EU-like phenomenon — simply impossible. It’s a recipe for creating conflict.
I understand the appeal of the vision. I’m all for NAFTA. I like multilateral institutions, but I think it’s the wrong way to go. The UN is of some use but in many ways an impotent bureaucracy. You would not want it ruling over us. You tend to recreate some of the worst aspects of national bureaucracies and then infuse them into a least-common-denominator sort of politically correct institution that’s just not very effective. So I think that’s the wrong path overall.
\---
But surveillance tends to corrupt your rulers, and it tends to increase the returns to being in charge. I think, over time, it increases the chances of, say, a coup d’état or political instability in China.
Even though you have more stability at the ground level, you may have less stability at the top. I think this is one of the two or three biggest issues facing the world right now: What are we going to do with surveillance and AI, facial and gait recognition? I don’t think we know what to do. I would say I more worry about it than applaud it.
## Moral philosophy
**In _Stubborn Attachments_ your views end up aligning with common-sense morality a lot. Is there a deeper reason why common-sense morality is so often right about these issues, or is it just a coincidence?**
Well, common-sense morality evolved, and I really wouldn't want to argue that it evolves to exactly the socially optimal point, but at some very gross level there is a kind of group selection. \[...\] If your philosophy totally contradicts common-sense morality, as I think Peter Singer's sometimes does, I think you should start worrying that maybe it won't actually fare that well if you tried it.
I'm just up front about a framework I think basically everyone shares. I don't pretend to know the full content of the actual, fully realised pluralist bundle. It just seems to me that ethics is complex. That differences of perspective have persisted for so long between very well-meaning people, for literally millennia, I think has to mean that there is a multiplicity of goods and that, you know, we should care about many of them. \[...\] You wanna look for the findable cases where a bunch of values we care about more or less co-move.
Underlying philosophy of MR = belief in excellence.
Russ Roberts: Let's turn to a philosophical question, which is utilitarianism, which you write quite a bit about in the book. I think you define yourself as a 2/3 utilitarian. What do you mean by that?
Tyler Cowen: Well, that was a little tongue in cheek. But, I think if you are looking at a public policy, the first question you should ask should be the utilitarian question: Will this make most people better off? It's not the endpoint. You also need to ask about justice. And you should consider distribution. I think you should consider, say, how human beings are treating animals. You might want to consider other broader considerations. But that's the starting point. And if your policy fails the utilitarian test, I'm not saying it can never be good. But it has, really, a pretty high bar to clear. So, when I said "two thirds," that's what I meant.
Tyler Cowen: Well, this is the Peter Singer conundrum: How can you enjoy that active personal consumption, that chocolate ice cream cone, when, dot-dot-dot people are starving? You've been hearing this from your parents when you are a kid: How can you leave food on your plate when there's hungry children in Africa? Whatever the tale used to be. That's a morally interesting question, but I don't think it's the most relevant question. I think the most relevant question is: What can you do so the global economy grows at a higher rate? And that's going to help the poor, including in other countries more than anything else because of technology transfer, remittances, immigration. Multinationals, hiring people at higher wages, and so on. And if you ask the question, 'Well, what can I do for the poor in my own country, and other countries?' the answers will be to work really hard; try to innovate; save a lot; contribute to highly productive organizations. I do think we should feel a greater compulsion to do those things than we do now. So, I'm willing to bite that bullet. But, that to me is the moral dilemma. You know--not the ice cream cone. If the ice cream cone is what motivates people to produce value because they love ice cream, I say, 'Full steam ahead' with the ice cream cone. I'm worried that we are not innovating enough.
Russ Roberts: And you've also now invoked religion, with a wink and an asterisk and a--I don't know what else. So, what do you mean by 'transcendent,' how you say; and what do you mean by 'religion'--quote "the good kind"--not any particular god? What does that mean to you?
Tyler Cowen: Well, I think people as biological beings have a lot of programming to think about the immediate and short term. There's plenty of evidence for that. But to get to growth maximization we need this longer-term perspective. Now, how is it we are going to do that? Well, partly we could kind of whip ourselves into submission out of our high time preference, right? But that very often doesn't work. You get people to think about the deeper future, you know, love of descendants, care about higher values in some regard, looking beyond themselves, thinking about broader values. And those, you might call religious in some way. And I think that's the--you know, the whole answer to the problem, actually, whether you identify with a particular formal religion today or not. And personally, actually, I don't, as you know.
It’s not that I think the right answer is for everyone to be so attuned to the exact correct moral theory. They’re going to use rules of thumb. We’re going to rely on common-sense morality whether we like it or not — even professional philosophers will, and that’s okay, is one thing I’m saying. Just always seek some improvement at the margin.
If you’re trying to find what’s the intuition you should be least skeptical about, I would say it’s lives that are much richer or happier and full of these plural values to an extreme degree compared to other lives. Even there, we can’t be sure, but that seems a kind of ground rock. If you won’t accept that, I don’t know how there’s any discourse.
Robert Wiblin: Okay, let’s move on to some other things in the book that I wasn’t entirely convinced by. You make the argument in one of the chapters that, even though our actions seem to have very large and morally significant effects in the long run, that doesn’t necessarily mean that we have incredibly onerous moral duties. We don’t necessarily have to set aside all of our projects in order to maximize the growth rate of GDP or improve civilizational stability. What’s your case, there?
Tyler Cowen: Well, I do think you have an obligation to act in accordance with maximizing the growth rate of GDP, but given how human beings are built, that’s mostly going to involve leading a pretty selfish life: trying to earn more, having a family, raising your children well. It’s close to in sync with common-sense morality, which to me is a plus of my argument. What it’s telling you to do doesn’t sound so crazy.
You don’t have to re-engineer human nature. So if someone from more of a Peter Singer direction says, “Well, all the doctors have to run off to Africa,” people won’t do that. We can’t and shouldn’t coerce them into doing that.
The notion that living a "good life" while making some improvements at the margin is what you're obliged to do, I find very appealing. It's like, "Change at the margin, small steps toward a much better world." That's the subheader on Marginal Revolution. It's also a more saleable vision, but I think that it accords with longstanding moral intuitions, which shows it's on the right track.
Robert Wiblin: Yeah, okay. It seems like, given your framework of long-termism, the moral consequences of our actions are much larger than what most people think when they’re only thinking about the short-term effects of their actions. In that sense, the moral consequences should bear on us more than they otherwise do.
Robert Wiblin: Yeah. There’s this big trade-off in philosophy between having a simple theory, a parsimonious theory that only has a few pieces, and then being able to match the common-sense intuitions we have about every case or about every claim.
I’m — in the field of philosophy, specifically — in favor of parsimony and against following common sense or having very complicated theories. In other domains, I think we need to use common sense and accept a loss of parsimony. Where do you fall on that spectrum?
Tyler Cowen: I’m a little closer to common sense than you are. It may not have much metaphysical standing, morally speaking, but the world is ruled by common sense. People behave in accord with common sense, so it’s probably counterproductive to stray too far from common sense.
A good ethical theory which has to have a practical component — it should be in accord with a lot of common sense, but revise other parts of it. You need both, and if the theory is either too much just matching the intuitions or totally overturning all of them, I get suspicious.
Again, this idea of “Revise at the margin.” It seems to me how we make progress in science, in business, so maybe it’s how we should try to make progress in ethics too. It has a pretty good track record.
Maybe the fundamental, and indeed, insoluble problem of philosophy is how to integrate the claims of nature with the claims of culture. They’re such separate spheres, but they interact all the time.
In the final appendix B of my book, I talk about this problem: how do you weight the interests of humans versus animals, or creatures that have very little to do with human beings? And I think there's no answer to that. The moral arguments of Stubborn Attachments — they're all within a cone of sustainable growth for some set of beings. And comparing across beings, I don't think anyone has good moral theories for that.
Robert Wiblin: But it seems like on your view, you should think that, while we don’t know what the correct moral trade-off is between humans and animals, there is a correct moral trade-off. It’s just very hard to figure out what it is.
Tyler Cowen: I’m not sure what we would make reference to to make that trade-off. There’s some intuitionism, like gratuitous cruelty to animals — even not very intelligent ones — people seem to think is bad. That’s easy enough to buy into.
Robert Wiblin: But you support interpersonal aggregation across humans. Then it just seems like there should be a similar principle — though more difficult to apply in practice — that would apply to a chimpanzee and a human?
Tyler Cowen: We’re very far from knowing what it is. But chimpanzees are pretty close to humans. That strikes me as quite possible. But if you’re talking about bees and humans . . . What if another billion bees can exist, but one human has to have ongoing problems with migraine headaches? My best guess is we will never have a way of really solving that question using ethics.
Robert Wiblin: Yeah. I agree that the practical problem gets very severe when you’re comparing humans and insects. But I think, in principle, the solution follows the same kind of process as when you’re comparing humans and other humans and chimps.
Tyler Cowen: I’m not sure the practical problem is different from the conceptual problem. I think it’s a conceptual problem, not a practical one. We could hook up all the measurements to those bees we want, and at the end of the day, whether a billion of them is worth a migraine headache for a human . . .
Robert Wiblin: But you say you’re a moral realist. Shouldn’t there be an answer then?
Tyler Cowen: I don’t think there’s an answer to every question under moral realism.
Robert Wiblin: Some listeners might be listening to this and thinking, “Well, yeah, it might be the case that if we grow GDP today, this will also increase GDP in a thousand years time. But I don’t really care about a thousand years in the future.” What would you say to try to convince them that a thousand years in the future does have important moral significance?
Tyler Cowen: Well, imagine our ancestors sitting around, say, a thousand years ago, saying they didn’t care very much about us. And they were willing to accept a growth rate, say, a percentage point lower than what has been the case for the last thousand years.
We would all, right now, be in extreme poverty. We would be suffering. Life expectancy would probably be something like 40 years of age. We wouldn’t have created a lot of artistic and cultural wonders. So there’s a plurality of values that’s supported by economic growth. And that’s the most fundamental thing we should be willing to endorse at a macro level.
Robert Wiblin: Do you think, like me, that there’s a chance that a future technology could make human life just a hundred or a thousand times better than it is for people today?
Tyler Cowen: I don’t know that we have a meaningful metric for saying that, but I suppose I don’t think that’s possible. I think we can make it twice as good and quite a bit longer, but I don’t think it will be inconceivable compared to what we can imagine now.
Robert Wiblin: What about if we imagined that we find a way for people to take the best, most enjoyable drugs that they can take today without having negative effects on their brain in the long term? It seems like that could result in a life that’s 10 times better than what people typically experience today, at least in some narrow sense.
Tyler Cowen: I think if you’re a pluralist, that life is maybe not better at all. It has more pleasure, but these other plural values seem to be weaker because you’re pursuing only pleasure. So that may be a dystopian scenario for a true pluralist.
Robert Wiblin: Yeah, I suppose. Well maybe we could push it out in that way on all of these margins. You get many different things, but we do that much more efficiently.
Tyler Cowen: Some people specialize in drug taking, I’m fine with that if it’s not harmful, but I don’t want the whole world to become lotus eaters.
Robert Wiblin: I’ve characterized you as kind of a total global, objectivist consequentialism, plus respect for a nonaggression principle. Do you think that’s a decent summary?
Tyler Cowen: That’s very close. I would say respect for human rights. The human rights may or may not always be defined by the nonaggression principle. I think, for the most part, they are.
I think there are objective rights which people hold, and we should respect them. And we maximize growth within that framework.
Robert Wiblin: So what fundamentally is the philosophical argument in favor of human rights?
Tyler Cowen: It’s intuitionist, I think, that there are simply some acts that are so horrible, a person who doesn’t see them is horrible. It would be hard to have further discourse with them.
Tyler Cowen: Derek Parfit, in his 1984 book, Reasons and Persons, had an example that, say, you would bury nuclear waste, and several generations from now, the waste would, say, kill millions of people.
But the fact that you buried the waste would change the timings of subsequent conceptions. So the people who are being killed a million or, say, a thousand years from now, they wouldn’t have been born otherwise unless you had buried the waste.
You could argue, “Well, I haven’t harmed anyone at all. By burying the waste, I caused them to be born.” They die of a terrible cancer when they’re 27 years old, but on net, this is still following the Pareto principle.
I think I have an argument why that’s wrong, namely the case where you don’t commit the very harmful act. You might have different identities of people, but you’ll have a much greater aggregate of good in the more distant future. And it’s not about individual identities. So there’s something a little oddly collectivist about my argument, you might say.
Tyler Cowen: Here’s a tension I think that we all have to face up to. Parfit talks about something called the person-affecting principle. How does your action affect some particular person?
But if you’re willing to make aggregate judgments and engage in an active aggregation, saying some kinds of societies are better than others or some policies are better than others, there’s something in the micro foundations of that judgment that’s fairly nonindividualistic.
People want to be consequentialists, and they want to be pure individualists sometimes. It’s not actually a fully happy marriage of views. And the notion that, once you jam together different measurements of well-being, you’re making a collective judgment about the overall course of history is even slightly Hegelian. You could also think of this book as a Hegelian defense of liberty.
It’s always instructive to look at how people behave as parents or maybe how they vote in a department when they’re dealing with their colleagues. And they’re some form of consequentialist in all those cases. If you take the intuitions they’re using in these smaller decisions and just build them up onto a larger scale, I think the logic of consequentialism is very, very hard to escape.
And when people say, “Oh, I’m a deontologist. Kant is my lodestar in ethics,” I don’t know how they ever make decisions at the margin based on that. It seems to me quite incoherent that deontology just says there’s a bunch of things you can’t do or maybe some things you’re obliged to do. But when it’s about more or less — “Well, how much money should we spend on the police force?”
Try to get a Kantian to have a coherent framework for answering that question, other than saying, “Crime is wrong,” or “You’re obliged to come to the help of victims of crime.” It can’t be done.
I think addiction is an underrated issue. It’s stressed in Homer’s Odyssey and in Plato, it’s one of the classic problems of public order—yet we’ve been treating it like some little tiny annoyance, when in fact it’s a central problem for the liberal order.
\---
Tyler Cowen: It seems to me there is something valuable about humanity reaching its highest potential, say, through the works of classical music or some of the greatest painters, that is not strictly reducible to the number of people paying money for it or enjoying it at any point in time. At some points in time, that number may be zero.
Simply having achieved certain kinds of semi-perfectionist peaks, to me, is part of the pluralist bundle. But again, I think it’s important that we have arguments robust to those who are skeptical that that should count at all.
Robert Wiblin: Do you worry about this reliance on intuitions about the value of particular things? Or how we are to respond to particular cases is vulnerable to evolutionary debunking arguments that . . . It’s like, we think that streams are particularly beautiful or fertile lands look particularly beautiful.
It seems like we don’t really want to say that, in some fundamental, objective sense, all aliens, for example, ought to value the appearance of streams or paintings of natural scenes. That seems like a very idiosyncratic human thing rather than a fundamental moral principle. What do you make of that?
Tyler Cowen: Philosophers often overuse ethical intuitionism. Sometimes I’ll read a Philosophy and Public Affairs piece, and I’m always wishing they would write down axioms and argue for or against the axioms, but instead it’s: here’s one comparison they make, and then another.
\---
So it’s getting back to the question of comparing a billion bees to one person having a migraine headache, and I just don’t think we can do it. Moral realism can’t handle utility comparisons across very different kinds of beings.
\---
If you think of ethics as making sense within some sphere, within some background set of suppositions, and one of those is simply that human beings exist, you can then think that within that context certain rights are quite absolute. But if you need to engage in a mass rights violation just to save humans from going extinct, that becomes permissible.
Deontology works better in small societies with no potential for economic growth.
There could be a Straussian argument for not talking too loudly about the exceptions because on average they are likely to be abused for public choice reasons.
\---
After some point none of us are able to care about the distant future… I don't think it's actually a sign that you're reasonable or rational, even though it's the correct point of view… none of us are actually good enough to think that, so the way we get there is by having a kind of faith that the distant future matters.
\=> Faith as bridge between system 2 and system 1.
\=> Economic growth is faith-based as an empirical matter, deriving from how human beings really are – that is, incapable of being good enough to really give a damn about 170 years from now. That's one of the key messages of the book.
You have to have doctrines we can believe in. The way you convince people is not always by giving them the facts; you have to make a faith-based argument.
Yes to growth-enhancing redistribution. Put resources wherever they'll compound to create the most value.
Balance – don't model masochism. GDP (as Wealth Plus) is the enjoyment.
\---
Tyler: Well I’m not a utilitarian per se. I would say I’m a consequentialist but there’s a relativistic element to my consequentialism. So questions like, “How many happy plants are worth the life of one baby?” — Maybe there can never be enough. But, I suspect the question just isn’t well-defined. How many dogs should die rather than one human being? I don’t even know what the units are. So, I think the utilitarian part of consequentialism only makes sense within frameworks where there’s enough commonality to compare wellbeing.
Nick: Finally, you’ve often said that most political disputes are really disputes about who gets status. Nominate a few things or people to which we should give more status?
Tyler: Everyone. Everyone pretty much deserves more status (not Hitler, not mass murderers). Most things are underappreciated, and they're criticized; praise motivates people and helps them have a sense of fitting in. Going around to appreciate, and express your appreciation for, what you really value is one of the best things you can do with your life.
\---
A new child in no sense at all substitutes for a recently dead loved one. Marginalism is out of its depth here, because not all value concepts are hospitable to the idea of trade or substitution or replacement. The most fundamental principles of human life lie outside the scope of economic thought, and are instead situated in philosophy.
\[ tyler replying to Agnes \]
When it comes to valuing lives, different lives are either commensurable or they are not, again as I discuss in the book. If they are not, nobody is going to produce meaningful rankings of different social states of affairs, not even by summoning up the mysterious ghost of “philosophy.” If human lives are commensurable in some way, we are back to sustainable compounding growth as giving us a decisive answer. When it comes to real world policy, we must indeed choose, and it is simply punting to claim there is no basis for comparison or trade-offs. Economics and the logic of social choice return, whether we like it or not.
Tyler challenges me: if we don’t compare the value of human lives, how can we possibly decide how much medical care the government should provide? I respond: There are many ways! Philosophy teaches that such a comparison is but one way to make a decision. Since this isn’t an essay in systematic philosophy, I won’t aim for coverage of the field, but instead focus on one plausible contender: the concept of bounded obligation.
I do not find myself stymied by the demand to make intelligent, well-grounded decisions about how much educational care to provide, even given my refusal to make invidious comparisons about the comparative value of my students’ minds.
## Advice
\- TC classic advice for young people: first, get one really good mentor, ideally two or three. And second, get a small group of really good friends that you love talking to. "Small group theory and mentors, that's my generic advice".
Why are mentors so important? "I think they only give you a few things, but those things are so important. I think they give you a glimpse of what you can be, and you are oddly blind to that in the absence of those mentors, even if you are very very smart. So I think the rate of return to good mentors is just absolutely enormous. You don't need many, choose them wisely, more than one is okay, actually ignore most of what they say, but what you get from them will be so important. \[...\] I think usually it has nothing to do with what they tell you; they might tell you whatever BS, who knows, right. But it's what you see. \[...\] When I was young \[I think 14\] I met Walter Grinder, and he had tried to just read as many books as possible. And just the notion that you could be a human and you could do that, I got from him; that was a huge influence." Couldn't we just get that kind of influence from watching you on YouTube? "Well, you tell me. I think to some extent, but I think having flesh-and-blood mentors is very important. But again, it's a portfolio approach: you want both, and now you can get both."
To revisit this idea of reading people and absorbing the platform-agnostic sensibilities of people, as you are doing that, do you start to get these kind of like avatars of people in your head, and then when something new happens to you or you’re like at a restaurant, you kind of have these avatars of these people that you've studied in depth and you sort of can say like, “What would this person think about this Thai food or this new book that I'm reading?” Do you start to like really absorb these people's perspectives and have them play off of each other in your head?
\[00:59:41\] TC: I’m a big advocate of that, and for 30 years I’ve taught my students. I call it the Phantom Tyler Cowen. You want to have the Phantom Tyler Cowen sitting on your shoulder the rest of your life when you're thinking about economics or maybe a few other things. And then I found out, maybe two years ago, that Peter Thiel and Marc Andreessen once had some kind of chat where they talked about exactly the same thing. But I think one of the best ways to learn is to develop these internal mental and emotional models of what other people would say with respect to what choices you're facing or what thoughts you’re having or what research paper you’re writing.
Small group of very smart friends; find one or two really good mentors who will help you out and actually care about you. I think those things are at least 5x more important than they're made out to be. I think obsessing about which school you go to is a bit overrated. I think that's good advice for most people.
Tyler Cowen: I think most people are actually pretty good at knowing their weaknesses. They’re often not very good at knowing their talents and strengths. And I include highly successful people. You ask them to account for their success, and they’ll resort to a bunch of cliches, which are probably true, but not really getting at exactly what they are good at.
If I ask you, “Robert Wiblin, what exactly are you good at?” I suspect your answer isn’t good enough. So just figuring that out and investing more in friends, support network, peers who can help you realize that vision, people still don’t do enough of that.
\---
When someone asks you for advice, whether about relationships or other matters, I think often they do not want advice. They want the feeling that they have exhausted their options, that they have processed all of their alternatives, to build their confidence to just do a thing \[that they’ve somehow already decided to do, even if they’re not fully aware of the decision yet\].
So when you give advice you’ve got to realise that they’re probably not going to listen to you, even if what you say is awesome. So when you give advice you’re kind of a placebo, a useful placebo. It doesn’t mean you should tell them lies, but you also need to think about how you are presenting the material, whether you should be giving the person confidence to follow their inner self or not. Think it through from a meta angle: what is my advice doing here? Consider it strategically; don’t just say what you think is the best thing they ought to do.
\----
At critical moments in time, you can raise the aspirations of other people significantly, especially when they are relatively young, simply by suggesting they do something better or more ambitious than what they might have in mind. It costs you relatively little to do this, but the benefit to them, and to the broader world, may be enormous. This is in fact one of the most valuable things you can do with your time and with your life.
\---
what kind of stories should we be suspicious of? Again, I'm telling you it's the stories, very often, that you like the most, that you find the most rewarding, the most inspiring. The stories that don't focus on opportunity cost, or the complex, unintended consequences of human action, because that very often does not make for a good story. So often a story is a story of triumph, a story of struggle; there are opposing forces, which are either evil or ignorant; there is a person on a quest, someone making a voyage, and a stranger coming to town. And those are your categories, but don't let them make you too happy. (Laughter) As an alternative, at the margin - again, no burning of Tolstoy - but just be a little more messy. If I actually had to live those journeys, and quests, and battles, that would be so oppressive to me! It's like, my goodness, can't I just have my life in its messy, ordinary - I hesitate to use the word - glory but that it's fun for me? Do I really have to follow some kind of narrative? Can't I just live? So be more comfortable with messy. Be more comfortable with agnostic, and I mean this about the things that make you feel good. It's so easy to pick out a few areas to be agnostic in, and then feel good about it, like, "I am agnostic about religion, or politics." It's a kind of portfolio move you make to be more dogmatic elsewhere, right?
don't fall into the trap of thinking because you're agnostic on some things, that you're being fundamentally reasonable about your self-deception, your stories, and your open-mindedness. (Laughter) \[Think about\] this idea of hovering, of epistemological hovering, and messiness, and incompleteness, \[and how\] not everything ties up into a neat bow, and you're really not on a journey here. You're here for some messy reason or reasons, and maybe you don't know what it is, and maybe I don't know what it is, but anyway, I'm happy to be invited, and thank you all for listening.
[https://www.ted.com/talks/tyler\_cowen\_be\_suspicious\_of\_stories/transcript](https://www.ted.com/talks/tyler_cowen_be_suspicious_of_stories/transcript)
## Writing style, thinking style
"I am good at being either blunt and to the point or Straussian and complex which is clear in a very roundabout way but doesn't look clear to the uninitiated."
RW: \[...\] Your top priority would be figuring out ways to coordinate humans better, and indeed that is a really high priority for people in the effective altruism community and many people who are working with this long-termist framework elsewhere. Do you think that you might want to write a book about how to improve coordination and international cooperation in the future?
Tyler Cowen: Maybe. That may not be an issue that’s good for a book, of course. Some issues you write about but not necessarily in book form.
\---
Years ago, you wrote that, in order to enforce a level of epistemic humility on yourself, which you think is appropriate, you tried to be extremely reluctant to move your credences out of the range of 40 percent to 60 percent, on controversial issues at least.
I found that that really stuck in my head for many years and became a bit of a rule of thumb to me, that when I see my credences moving out of the 40 to 60 percent range, I have to stop and really pause and think about whether the evidence is strong enough. Do you still try to follow that principle? And if so, how do you go about it?
Tyler Cowen: I try all the more. I think the best way to go about keeping epistemic humility is to try to write out the arguments of the side you disagree with. In part, I use Marginal Revolution as a vehicle for that. It’s a selfish use for me.
\---
People don't like it; they like to push that stuff away, keep things neat and easy to deal with. What I call the philosophy of once-and-for-all-ism. They want to be done with stuff once and for all. But that rarely works. \[...\] It's a mess, and that's OK.
\---

## Artificial intelligence, AI safety, Artificial General Intelligence
If I go to Spotify or Netflix, they don’t even recommend stuff I want to hear or see. Right now, A.I. is a bit better than a glorified cash register, but it seems to me quite far from being a potentially destructive force. The real danger to me is evil humans operating machines including A.I., and them inflicting the harm. So the idea that the machines are going to take the initiative just seems very distant. And, if there’s that much destructive power, I’m way more worried about the humans who have a much worse track record.
## Current growth rates, stagnation
You have a thing called general purpose technologies, one of those being fossil fuels plus machines, which became significant in the 19th century, and then you have the big growth spurt. You do everything you can, say, with fossil fuels and machines: you get cars, you get planes, electricity, powerful factories. But at some point, your cars only get so much better. And then you wait for the next big breakthrough.
The next set of big breakthroughs may well involve the Internet, artificial intelligence, Internet of things. They are not quite here yet. You see many signs of them. They don’t yet make the growth rate much higher, and then you will have a big period of explosive growth and then a slowing down again. That’s my basic model.
## Production function
TC plans a lot of open space in his days.
Just one commitment: write every day. Literally every day. Don't have to worry about how much, if you do it every day you'll get better and faster.
Spends a lot of time writing out and exploring views he disagrees with. That might be another major production function advantage.
Answering email is my business model.
Perell suggests TC is a kind of Bismarck figure; TC doesn't push back.
Wants to be "the most successful economist to use the internet as a platform to foment broad enlightenment."
Individuals should do a lot more of the stuff that companies do, including measurement, mission statement, vision, etc.
Sama thinks that fast email is a signature that a person is always taking in information from the outside world, and that that's where you get most of your good information. I tend to agree with that. People who let their inboxes lie fallow for, like, two weeks, I guess I think less of them for that.
## Stubborn Attachments
Tyler Cowen: The underlying message of the book is simply, we’re capable of making rational judgments about what is better for society. In my own discipline, economics, there’s a long-standing thread of skepticism about that. Kenneth Arrow developed an impossibility theorem. There are a lot of results that imply you can’t say much about what’s actually better.
So this book is a synthesis of economics and philosophy, and it’s trying to argue to both economists and philosophers, but also ordinary readers, there is such a thing as what is objectively good. It is based on the idea of supporting economic growth. That’s the one thing that, over time, we can say is much better than the alternative of not having as much economic growth.
“We can already see that three key questions should be elevated in their political and philosophical importance. Namely: number one, what can we do to boost the rate of economic growth? Number two, what can we do to make civilization more stable? And number three, how should we deal with environmental problems?”
Tyler Cowen: The classic aggregation puzzle in economics is simply there’s a policy — some people are better off, some people are worse off. How can you possibly judge if the policy is worth doing? There are hardly any what we, as economists, call Pareto improvements, policy changes that make virtually everyone better off.
But over a span of a few generations, if you have a higher rate of economic growth, people today in today’s wealthier world are really, as a whole, obviously much better off than, say, people in the 18th century or even the 19th century. That’s an aggregation judgment we can make. I think it would be supported by people’s demonstrated preferences as to where they want to migrate.
And there are some obvious moral facts that, if the standard of living is, say, three to five times higher in one society rather than another, the wealthier society is better.
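The claim above rests on compounding: small differences in sustained growth rates produce large gaps in living standards over generations. A minimal sketch, with a hypothetical starting income and illustrative rates (these numbers are mine, not Cowen's):

```python
# Illustrative only: compound growth in per-capita income.
def income_after(start: float, rate: float, years: int) -> float:
    """Per-capita income after `years` of compound growth at annual `rate`."""
    return start * (1 + rate) ** years

base = 10_000  # hypothetical starting per-capita income
for rate in (0.01, 0.02, 0.03):
    print(f"{rate:.0%} growth for 100 years: {income_after(base, rate, 100):,.0f}")
# From a 10,000 start: 1% -> ~27,000; 2% -> ~72,000; 3% -> ~192,000.
```

A two-percentage-point difference in the growth rate, held for a century, yields roughly a sevenfold difference in outcomes, which is the scale of gap Cowen's "three to five times higher" comparison points at.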
Tyler Cowen: I think the most plausible alternative to my view is simply to say the actual time horizon is not very long, that maybe in an extreme case, either the world will end soon or history will start collapsing and run in reverse. So there is no grand, glorious future that has a heavy weight in the calculation. And thus, we’re always dealing with the here and now, a quite pessimistic view.
When we’re talking about growth here, we might imagine time on the X axis and welfare being generated in the universe on the Y axis, and you want to increase that faster. Why focus on increasing the rate rather than making sure that that doesn’t go to zero?
Tyler Cowen: Well, keep in mind the core recipe is the rate of sustainable economic growth. If it’s going to go to zero, you’re knocked out of the box. So you’re maximizing across both of those dimensions, and I think, empirically, there are a large class of cases where more growth and more stability come together.
I would note that earlier versions of this book — you know, I worked on this for about 20 years — the earlier versions had much, much more on existential risk, and it took me years to cut those out. I never repudiated any of the ideas. They just came up in enough other books. I felt I wanted to stick to my core notion of growth more than existential risk and stability.
You and many other people in the effective altruism movement have written on existential risk, and I endorse most of that. But just at the margin, it seemed to me growth was underestimated.
What would Tyrone have to say about the book?
Tyler Cowen: Well, I think Tyrone would endorse the pessimistic view, that the future is not so grand and glorious. It doesn’t have the moral power I attribute to it, and that we just ought to have more of a kind of Nietzschean scramble for the here and now, and there is no final adjudicator of these clashing values.
Morality becomes not so much deontological, but for Tyrone, it would become relativistic and almost nihilistic. That’s what Tyrone said to me about this book. He bugs me all the time. I try to shut him up, but I can’t do it.
So every single thing you do, including our discussion, remixes the future course of world history. If you’re a consequentialist, you need to take that seriously. You need to ask, “Does this simply make my entire doctrine incoherent?”
The stance I take in the book is, if you’re pursuing this truly large significant grand goal of making the future much, much better off in expected value terms, that will stand above the froth of the uncertainty you create by remixing things with every particular decision.
As I argued in my earlier book, The Complacent Class, there now seem to be so many people who are simply satisficers. They’re not very interested in innovating or even participating in a dynamic economy, and they just try to do well enough. I’m here making a moral argument that at the margin, many, many people should be less complacent, take more chances personally, lower aggregate societal risk, and do more to innovate, save more, work harder, in some way be more dynamic. You can think of this and Complacent Class as two sides of a bigger picture. Complacent Class is like the sociology of what we’re doing, and this is the moral side.
\--
So we have very immediate successes near us. Could we do better? Absolutely, but the idea of there being this general public movement where you get people to do the right thing by scaring them, I think that’s the opposite of how politics usually works. Voters like to live in denial, and if you scare people too much with, say, climate change, they respond by thinking it’s not actually all that significant.
I think some kind of more positive vision — you’re more likely to get people on the sustainability bandwagon. That’s one of the backstories to my book: I’m trying to give a positive vision, emphasizing less scaring the heck out of people and more, “Here are the glories at the end of the road, what you can do for your descendants and world history.” Scaring people seems to backfire in politics.
\---
TC: The idea that we as humans have stubborn attachments to other people, to ideas and to schemes and to our own world and then trying to create a framework that can make sense of those and tell people it's rational and they ought to double down on their best stubborn attachments and that that's what makes life meaningful and creates this cornucopia in which moral and ethical philosophy can actually make sense and give us some answers.
Tyler Cowen People always think they’re more right on average than they are. This is true of everyone. If it’s true of everyone it has to be true of me, so I wanted to build a set of arguments that in some way were robust to me being wrong most of the time, and that’s hard to do. If you’re wrong most of the time, your arguments are wrong most of the time. But is there some meta-level where there’s a claim you can make that is taking that into account in some way.
\---
This book is coming out of the tradition of social choice theory. How can we say, ever, that one outcome, socially speaking, is better than another?
\---
You can think of a lot of the book \[Stubborn Attachments\] as applying the magic of compounding returns to ethics. Super simple, but academic philosophers tend not to do that.
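The "magic of compounding" can be made precise with the standard doubling-time arithmetic (my illustration, not from the book):

```latex
% Doubling time T for a quantity growing at rate g per year:
(1+g)^T = 2
\quad\Rightarrow\quad
T = \frac{\ln 2}{\ln(1+g)} \approx \frac{0.7}{g}
% So g = 1\% doubles living standards roughly every 70 years,
% while g = 3\% doubles them roughly every 23 years.
```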
\---
Wealth is a buffer against tragedy.
## Progress studies
Progress itself is understudied. By “progress,” we mean the combination of economic, technological, scientific, cultural, and organizational advancement that has transformed our lives and raised standards of living over the past couple of centuries.
Organizations as varied as Y Combinator, MIT’s Radiation Lab, and ARPA have astonishing track records in catalyzing progress far beyond their confines. While research exists on all of these fronts, we’re underinvesting considerably. These examples collectively indicate that one of our highest priorities should be figuring out interventions that increase the efficacy, productivity, and innovative capacity of human organizations.
Critical evaluation of how science is practiced and funded is in short supply, for perhaps unsurprising reasons. Providing it would be an important part of Progress Studies.
Progress Studies has antecedents, both within fields and institutions. The economics of innovation is a critical topic and should assume a much larger place within economics. The Center for Science and the Imagination at Arizona State University seeks to encourage optimistic thinking about the future through fiction and narrative: It observes, almost certainly correctly, that imagination and ambition themselves play a large role. Graham Allison and Niall Ferguson have called for an “applied history” movement, to better draw lessons from history and apply them to real-world problems, including through the advising of political leaders. Ideas and institutions like these could be more effective if part of an explicit, broader movement.
An important distinction between our proposed Progress Studies and a lot of existing scholarship is that mere comprehension is not the goal. When anthropologists look at scientists, they’re trying to understand the species. But when viewed through the lens of Progress Studies, the implicit question is how scientists (or funders or evaluators of scientists) should be acting. The success of Progress Studies will come from its ability to identify effective progress-increasing interventions and the extent to which they are adopted by universities, funding agencies, philanthropists, entrepreneurs, policy makers, and other institutions. In that sense, Progress Studies is closer to medicine than biology: The goal is to treat, not merely to understand.
Role of aesthetics in progress studies
Central!
## Emergent ventures
Partly inspired by study of Florentine renaissance and how artists then were supported. Maybe modern foundations have too much feedback.
The idea of Emergent Ventures is to create a philanthropic fund which will support projects that are maybe too weird or too small or too foreign or have results that are too hard to measure to be accepted by other major foundations.
So people are applying to Emergent Ventures. The final decision maker is myself. There is a minimum of bureaucracy. There are no layers of approval people must go through for me to see the proposal. And we are just now starting to hand out grants. Think of it as a kind of pop-up philanthropy.
## Biography
\- Thomas Schelling was TC's doctoral advisor. First person to ponder MAD scenarios in print.
TC really is hyperlexic, taught himself to read aged 2. Can probably read 5-10x faster than most peers. People have stopped doubting that I understand what I read now that I do podcasting.
Robert Wiblin: And throughout your career, how much has doing good guided your choice of what to work on?
Tyler Cowen: Very little. I’m quite a selfish person, I think, and I enjoy pursuing my own curiosity. Part of me at the meta level hopes that does some good, but I don’t think altruism is really, for me, a fundamental driving force. I enjoy absorbing information and communicating it to other people. And that’s, for me, what is fun.
## Other people
TC: I don't think of Peter Thiel as a tech person; I think of him as a humanities person with maybe the deepest understanding of the humanities that is out there now, of anyone. I think it's not an accident that Peter is fairly religious, that he has embraced René Girard; he has a degree in philosophy from Stanford and a law degree. For understanding the humanities side of what is going on in American society right now, he to me has been thinker number one for some time. Thiel may have the best BS detector of anyone, and maybe the best selector of talent America has seen, at least in the last 50 years. For that you need a pretty deep understanding of the humanities. Being bicultural and bilingual is an underrated source of Thiel's edge.
Patrick Collison isn't about a set of views, he's about a way of thinking, which he calls fallibilism.
Gurri: when you can see political leaders in great detail on the internet they inevitably decline in status, too much scrutiny is not good for anyone's reputation. They're brought down to a lower level, we realise their mediocrity or they appear mediocre even when they're not.
On Glen Weyl: I think simpler political rules tend to be better; they're mainly there to produce legitimacy and give people a stake in being involved. Something like quadratic voting is too complicated, and the fact that not even a local bridge club has seen fit to introduce quadratic voting … I don't think he takes that seriously enough, so I think he should be a bit more Burkean.
Bostrom is one of the smartest people today. He developed and popularised the idea of existential risk. People think they can make things more robust; I'm skeptical.
I think where \[Derek Parfit\] was most important is simply being the walking, living, breathing embodiment of the philosopher who is obsessively curious and will plumb the depths of any argument to such an extreme degree like has never been seen before on planet Earth. He was just remarkable, and that’s why he and his work have influenced so many people. I’m not sure which of his conclusions stand up, or even what his conclusions are. He’s not about conclusions; he’s about philosophizing in the Socratic sense. For that, he was just such a marvel. I wish more people could have known and seen and heard him.
Tyler Cowen: We have here to interview me, Robert Wiblin, who is one of the interviewers I most respect, and indeed, envy.
Robert is a long-standing leader in the effective altruism movement. He runs an excellent podcast called the 80,000 Hours podcast. And he is from Adelaide.
Nick Bostrom has an engineering mentality to his work that Parfit never did. Like, “What can we do? What should we do? How do we apply resources?”
Rawls influenced a lot of people, but when you read Rawls on growth and the future, it’s incoherent. Rawls is afraid of economic growth. At times, he seems to endorse a stationary state because any savings makes the first generation worse off, and they’re the least well-off people. That to me is a reductio on Rawls’s argument, the entire argument.
The way in which Rawlsian — like, the principles of liberty, and then maximin principle and what can be good for everyone — the way all those interact, I tend to think, is not coherent, and there are many sleights of hand in A Theory of Justice, not just the problem with economic growth and future generations and savings rates. It’s a brilliant book, how well he disguised those. It’s a kind of master class in the philosophy of disguise, is how I admire the book.
A zero discount rate, some economics and a dose of game theory considerably bridges the gap between consequentialism / utilitarianism and common sense morality. People were too distracted by the wonderful rhetoric of Bernard Williams. It's less of a problem than Williams thought.
Hilary Putnam taught the single best class I ever took in my life, on philosophy of language.
## Misc
\- "I don't think we understand historical contingency very well."
If the world economy grew 4-point-whatever percent last year, say 4.8 percent, way more of that is coming from better management, better institutions than is coming from new technology. Maybe 1 percent of it is coming from new technology and the rest from better management — in some cases, growing population, capital resources.
So institutions are way out-racing technology right now. Again, I’m not taking that for granted, but I think people would be much more optimistic if they viewed it in that light.
I think this is one of the two or three biggest issues facing the world right now: What are we going to do with surveillance and AI, facial and gait recognition? I don’t think we know what to do. I would say I more worry about it than applaud it.
How to make philanthropy better. Everyone says measure results. I think it’s become a cliché. It’s trivial. It’s begging the question. What is it you’re measuring? What counts as good? What counts as bad?
Might it not be the case that a lot of initiatives will do better by not trying to measure results, and do better getting outliers rather than homogenizing . . . with everyone running after the same easy-to-measure kinds of results? So okay, if it's not "measure results," what is it then? We need more philanthropic experiments and to think about them critically.
I think one of the big trends in the world today is that these super powerful countries are much weaker compared to mid-level emerging economies than they were 30 years ago. So a place like Turkey or Saudi Arabia — in total military and geopolitical terms, those countries have much more clout than they did not long ago.
And that makes the United States, China, also Russia quite a bit weaker. People don’t talk about this much, but you have many more regional powers with a lot of sway. Maybe that’s stabilizing. I don’t know.
Tyler Cowen: People generate new ideas, and most new ideas don’t disappear. You can lose a new idea or have a Dark Ages. But if you have good institutions, you build upon those new ideas. Also, you can have increases in labor, supply, and capital. Think of those as some key sources of economic growth.
Tyler Cowen: Absolutely. I think of culture as one of the keys behind economic growth in fact.
Robert Wiblin: Okay. You then go on to argue that, against the things that some people have said, wealth actually does lead to happiness. So we’re not just creating wealth for its own sake, but actually it’s going to increase welfare. What’s the case for that?
Tyler Cowen: If you look at the data within nations across classes of income or wealth, wealthier people are simply much happier than poorer people. There is a partial paradox: When you look at data across nations, you find a lot of poorer countries where people report they’re pretty happy. But I think what’s going on there, often, is they’re just using words differently.
For instance, if you polled Kenyans, “How happy are you with your health care?” Kenyans actually polled as being pretty happy with their health care. It’s not that Kenyan health care is so much better than we all think. They’re just used to a lower standard. So I think when you ask people about happiness across countries, there’s still a positive slope on that relationship. But you’re understating just how good wealth is for people.
Wealth also helps keep people alive. So all these polls — you’re only polling the living, not polling the dead. If you could poll all the dead people who passed away because new medicines were not invented for them or they took a riskier job and they died in an accident, put all those people into the poll. Again, wealth is going to do much, much better.
Robert Wiblin: I’m not entirely sure how to interpret that well-being literature. It seems potentially still an open question just how much wealth increases happiness today. But I feel like you almost didn’t make the strongest argument that you could make here, which is that, even if increasing GDP or wealth today doesn’t make people happier now, at some point in the future, it will, once we use that wealth and that greater knowledge to figure out new technologies that can turn our wealth into welfare.
I think there’s a fear amongst a lot of philosophers that if you’re too willing to aggregate, utilitarianism becomes too strong and they don’t like all of the consequences of that decision.
I haven’t yet read a really good critique of meritocracy. There’s plenty you can say against it. But, as with Churchill on democracy, it seems that all of the other systems are worse. I would stress the point that no meritocracy is ever quite presented as such, that in all social systems there are cushions and pillows for people’s egos and a true meritocracy where everyone knew their exact worth would, in fact, be psychologically intolerable. But, we’re also incapable of producing that. So it’s really about trying to have a system that actually rewards merit while not forcing people to quite face up to the fact so explicitly. And, it seems to me we have not gone too far with meritocracy, properly understood.
Elena Ferrante’s four-volume Neapolitan quartet and Knausgaard’s My Struggle, volumes 1 or 2. Those, to me, stand up with the greatest novels of the 21st century.
the idea that \[...\] to make progress you have to give up everything you hold dear. I find that unsettling. I hope it’s not true.
# Interview notes
## Stanford lecture: Arguments against Stubborn Attachments
[https://youtu.be/EO5jJFpbJvg](https://youtu.be/EO5jJFpbJvg)
Tyler Cowen discussing his main uncertainties about Stubborn Attachments. This helped me understand some of the reasons Cowen seems a bit less focussed on safety (e.g. reducing existential risks) than some other figures involved in effective altruism. It seems like the difference mainly comes from:
(a) Cowen thinks the human future is big (centuries), but not astronomically big (>millions of years). He either (i) doesn't think there's even a small possibility of an astronomically big human future, or, if he does, there's (ii) some other consideration that stops this possibility being a weighty consideration in favour of safety.
(b) Cowen is very unsure how to think about the value of future civilisations that are quite unlike our own. This is clearly doing some of the work in (a)(ii).
(c) Cowen is pessimistic about our ability to manage technological risks, no matter how hard we try.
Elsewhere I've heard Cowen claim that general economic growth is quite strongly correlated with safety, so for most people, "maximise economic growth" is the best maxim. I find this compelling. It's hard to tell how much substantive disagreement there is between a Cowen and a Bostromian worldview.
### Selected highlights
\> My actual view is that probably we'll have advanced civilisation for something like another 600 or 700 years. A very approximate view, pulled out of a hat; it's an intuition. But it's not forever, and the world's not going to end next year. It means there's not so much value out there that we should just play it safe across all margins. \[...\] I've thought about value maximisation vs safety a great deal since writing the book, and I've found I have this funny intermediate view: I'm actually not that optimistic, in a way, but it's the finiteness of our end that allows us to take some chances, normatively, with the world.
\> I think we should devote many more resources to limiting the chances of nuclear war. My fear is this: once this stuff is invented, I tend to think it's gonna happen sooner or later. It's hard for me to imagine there's something we could do to postpone it, like, another 5,000 years.
\> Total or per capita? I don't know, and it bugs me. When it comes to weighting I don't think I have a clear answer.
\> How to deal with animals? No one actually wants to put animals into the social welfare function. How many hamsters are equal in value to a person? I don't think there's an answer to that question. \[...\] Why are there different units? Is it the case you can't really make comparisons across different animals, different people, different worlds? \[...\] But once you do that, you then realise these other consequentialist calculations are embedded in some bigger-picture view. \[...\] You might wonder why you don't take that embeddedness and apply it a bit more directly to all those moral comparisons you were trying to clear up with all your silly talk about maximising the rate of sustainable economic growth, and again I'm back to "I don't know" with that.
\> If you think one of the next things growth will do is make us fundamentally different through something like genetic engineering or drugs … we're not then animals but we could then be fundamentally different beings that are outside of the moral cone I'm used to operating in and then I'm back to not knowing how to evaluate it. There's some like common moral cone that has the humans not the hamsters, future growth could push what are now humans outside that cone and then I think I'm back to a kind of incommensurability. Meantime I think in terms of EV, if it doesn't happen we'll just be better off, if it does we're not sure, so full steam ahead is where I'm at. But I do think about this quite a bit.
\> This moral absolutism, that my goodness, the discount rate has to be zero. Can't it be a smidgen more than 0? I don't know; that bugs me, rubs me the wrong way, makes me feel like I don't have the whole framework.
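To see why a "smidgen more than 0" is a live question, note that exponential discounting weights welfare T years out by 1/(1+r)^T. A sketch with illustrative rates of my own choosing:

```python
import math

# Illustrative only: how discount rates treat the far future.
def present_weight(rate: float, years: float) -> float:
    """Weight placed today on one unit of welfare `years` in the future,
    under exponential discounting at annual `rate`."""
    return math.exp(-years * math.log1p(rate))  # = 1 / (1 + rate)**years

for rate in (0.0000001, 0.001, 0.03):
    print(f"r={rate}: 700 years -> {present_weight(rate, 700):.4f}, "
          f"1,000,000 years -> {present_weight(rate, 1_000_000):.2e}")
```

A truly tiny rate (0.0000001) barely dents either horizon, a conventional 3 percent makes even 700 years worth about a billionth of the present, and an in-between 0.1 percent leaves the 700-year horizon half-weighted but wipes out a million-year future. So where exactly the rate sits between zero and a smidgen matters far more for astronomically long horizons than for Cowen's 700-year one.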
\> I can't push intuitionism out of my argument altogether, because I don't find utilitarianism self-evident. And then I'm back to messy pluralism and tradeoffs.
### All highlights
If you think we could endure forever, sustainability becomes the key thing. If you think we're done in a few years, maximising growth doesn't matter. It's in the intermediate scenario that maximise economic growth makes sense.
Agree with Greta Thunberg that eternal economic growth is a fairy tale, but think the case for 700 years of growth is quite strong.
Within consequentialism, different moralities are correct at different times (for guiding individual actions).
Doesn't believe in space settlement; thinks we'll destroy ourselves first.
Time horizon is just really important
Does economic growth revert to the mean? Solow growth model. Fast growth, then special interest groups accumulate. Silicon Valley is living mean-reverting growth, I suspect. Shortens time horizons.
Bill Easterly: persistence of per capita income over millennia.
Deep roots literature
Big worry:
Total or per capita? I don't know, and it bugs me. When it comes to weighting I don't think I have a clear answer.
All time gains or once and for all gains?
Does the rate of discount really have to be zero? Can't it be 0.00001%?
How to deal with animals? No one actually wants to put animals into the social welfare function. How many hamsters are equal in value to a person? I don't think there's an answer to that question. Why are there different units?
Is it the case you can't really make comparisons across different animals, different people, different worlds? There just aren't systematic ways we can make deals across different worlds.
Other consequentialist views are embedded in assumption of same world.
I can't push intuitionism out of my argument altogether, because I don't find utilitarianism self-evident. And then I'm back to messy pluralism and tradeoffs.
Economic growth and moral growth co-move.
Short run weak correlation between economic growth and happiness but long run extreme correlation.
If you think one of the next things growth will do is make us fundamentally different through something like genetic engineering or drugs … we're not then animals but we could then be fundamentally different beings that are outside of the moral cone I'm used to operating in and then I'm back to not knowing how to evaluate it. There's some like common moral cone that has the humans not the hamsters, future growth could push what are now humans outside that cone and then I think I'm back to a kind of incommensurability. Meantime I think in terms of EV, if it doesn't happen we'll just be better off, if it does we're not sure, so full steam ahead is where I'm at. But I do think about this quite a bit.
Pessimistic view is: kids just aren't that fun, Amazon delivery and Netflix is better. But I'm not that pessimistic.
How can we make having kids much more attractive? I think it's an important part of social science research.
Career / act advice from Cowen framework: invest in your own human capital would be dominant recommendation, save more,
Wealth is insurance against catastrophe.
Consequentialists are held hostage to empirics. I don't mind being held hostage to the empirics. We're all being held hostage to our speculations about how dangerous the future is, and these are not very reliable. I bite this bullet - if that's the relevant variable then we should just be much more uncertain about our big picture macro views. I get what you're saying it's a weird place to end up but I think it's the correct place. \[to be really uncertain about specific policies\]
Reduce x-risk vs stagnation [https://youtu.be/EO5jJFpbJvg?t=4031](https://youtu.be/EO5jJFpbJvg?t=4031)
I think we should devote many more resources to limiting the chances of nuclear war. My fear is this: once this stuff is invented, I tend to think it's gonna happen sooner or later. It's hard for me to imagine there's something we could do to postpone it, like, another 5,000 years. And indeed, you could imagine weapons becoming more destructive, nuclear weapons looking kind of like a toy gun at some point; maybe it's biological weapons, and it seems it's just going to get worse.
If you just look at the world I don't think anyone is willing to devote 80% of GDP to making sure that one innocent person isn't convicted. People will kind of say that but I don't think they believe it.
## Elucidations podcast
This book is coming out of the tradition of social choice theory. How can we say, ever, that one outcome socially speaking is better than another? And one of the arguments is: in a society where sustained economic growth is possible, if one society grows at a higher sustainable rate than another, then after decades or centuries that society will be much better off for virtually everyone. And that's the best we can do to solve aggregation problems.
[...]
If you think of ethics as making sense within some sphere, against some background set of suppositions, and one of those is simply that human beings exist, then you can think that within that context certain rights are quite absolute; but if you need to engage in a mass rights violation just to save humans from going extinct, that becomes permissible.
[...]
Deontology works better in small societies with no potential for economic growth.
There could be a Straussian argument for not talking too loudly about the exceptions, because on average they are likely to be abused for public choice reasons.
[...]
Once you have a zero discount rate... most but not all people will produce the most social value by working and creating and being loyal to a free society in a wealthy economy. That will in turn do a lot to elevate poor individuals around the world. We've seen phenomenal catch-up growth in emerging economies over the last few decades. Catch-up growth is certainly more effective than everyone running off to poor countries to be a doctor. But nonetheless, at the margin, some people should run off and do public health work in Africa, Asia, wherever it may be. So I try to reframe that as a bit of a game theoretic problem: not everyone should run off and fight malaria, but some people should. Think of it as a randomised Nash equilibrium: the people who can do it at lowest cost are those who should do it. Those are the people who more or less want to do it, who find it rewarding. The idea that they should do it and most of us shouldn't, that doesn't sound crazy, right? It's not this extreme obligation where you think utilitarianism is so inconsistent with common sense morality.
A zero discount rate, some economics and a dose of game theory considerably bridges the gap between consequentialism / utilitarianism and common sense morality. People were too distracted by the wonderful rhetoric of Bernard Williams. It's less of a problem than Williams thought.
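Why the discount rate carries so much weight here can be illustrated with the standard present-value formula (a sketch of my own, not from the interview, with made-up numbers): a benefit arriving t years from now is worth benefit / (1 + r)^t today, so at r = 0 every generation counts equally, while even a modest positive rate makes the far future nearly vanish.

```python
def present_value(benefit: float, rate: float, years: int) -> float:
    """Value today of `benefit` arriving `years` from now, discounted at `rate`."""
    return benefit / (1 + rate) ** years

# At a zero discount rate, welfare 200 years out counts at face value;
# at a 3% rate it is discounted by a factor of roughly 370.
print(present_value(1_000_000, 0.00, 200))         # 1000000.0
print(round(present_value(1_000_000, 0.03, 200)))  # 2707
```

So the choice between r = 0 and r = 0.03 is the difference between the year 2225 mattering fully and it mattering about 0.3% as much, which is why Cowen treats the zero rate as the crux.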
Let's say you're a programmer. You moved to Seattle, you work for Microsoft, you earn 350k a year. And you're just "selfish". But you buy a lot of goods from China and South Korea and eventually other countries, so you're driving a phenomenal amount of economic growth. The biggest growth miracle we've ever seen has come from export orientation: poorer countries exporting goods to wealthy countries, to mostly selfish consumers. A lot of foreign aid is not actually that effective. I do believe we should have foreign aid, but the model of just being selfish and spending money on foreign goods very often drives more benevolence than anything else you can do. There is something cumulative about it that foreign aid often doesn't have. The foreign aid I'm most optimistic about is foreign aid that tends to have cumulative ongoing benefits, so particular public health problems that lead to malnutrition and lower IQs, which can set nations back for very long periods.
I do think at the margin that people should be more charitable. And I think it's important that people live their philosophies in some way. So I think that someone like Peter Singer is useful at the margin.
I'm a bit like Peter Singer but with a lot less guilt. But there is a stipulation in the book that people are in some way obliged to be very productive. It's a broad notion of productivity, but saving and investing and working and trying to be creative, that's a strong obligation in my framework, in a way that many people actually do find somewhat oppressive. But I am fine with that and completely willing to bite that bullet. So there are strong obligations in my book, they're just more puritan in some way.
[...]
I'm just up front about a framework I think basically everyone shares. I don't pretend to know the full content of the actually fully realised pluralist bundle. It just seems to me that ethics is complex. That differences of perspective have persisted for so long between very well-meaning people, for literally millennia, I think has to mean that there is a multiplicity of goods and that we should care about many of them. \[...\] You wanna look for the findable cases where a bunch of values we care about more or less co-move.
[...]
Message: work, save and invest more, be more creative if you can, create jobs for other people if you're in a position to do so. Work hard, be loyal to your friends, have and raise a family.
## Econtalk Stubborn attachments
[https://pca.st/bHUr](https://pca.st/bHUr)
Since I was a graduate student I've been interested in the normative foundations of economics and political judgments. And in this book I try to argue we can actually solve the biggest issues in judging what makes a political or economic order right, why do we prefer one economic policy over another. So, it's a very philosophical book. And, unlike a lot of philosophy, which tends to lead to a kind of a nihilism or extreme skepticism, in this I try to suggest we actually have all the answers.
[...]
I argue that if you systematically introduce the idea of sustainable economic growth into philosophy, welfare economics, social choice theory, that that allows you really to clear up a lot of different problems. [...] Basically if you have one economy with a rate of compound growth over time higher than that of another economy, over some number of decades one of those situations will just very clearly be better than the other for almost everyone. So, that's the starting point of the book. The chapters cover a lot more issues. But that's kind of my entry point into the stuff talked about by John Rawls, Robert Nozick, Derek Parfit and other people.
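The compounding claim is easy to verify numerically (my own illustration, with made-up numbers, not from the transcript): a single percentage point of extra sustained growth, held for a century, leaves one economy nearly three times richer than the other.

```python
def income_after(start: float, rate: float, years: int) -> float:
    """Compound `start` at annual growth `rate` for `years` years."""
    return start * (1 + rate) ** years

slow = income_after(100, 0.01, 100)  # 1% growth for a century: ~270
fast = income_after(100, 0.02, 100)  # 2% growth for a century: ~724
print(round(fast / slow, 2))         # 2.68
```

This is the sense in which "over some number of decades one of those situations will just very clearly be better than the other": the gap is not marginal, it compounds into a different order of prosperity.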
[...]
Russ Roberts: Why did you call this book Stubborn Attachments?
TC: The idea that we as humans have stubborn attachments to other people, to ideas, to schemes, and to our own world; and then trying to create a framework that can make sense of those and tell people it's rational and they ought to double down on their best stubborn attachments, and that that's what makes life meaningful and creates this cornucopia in which moral and ethical philosophy can actually make sense and give us some answers.
Tyler Cowen People always think they’re more right on average than they are. This is true of everyone. If it’s true of everyone it has to be true of me, so I wanted to build a set of arguments that in some way were robust to me being wrong most of the time, and that’s hard to do. If you’re wrong most of the time, your arguments are wrong most of the time. But is there some meta-level where there’s a claim you can make that is taking that into account in some way.
## FT Alphaville
[https://ftalphaville.ft.com/2017/06/02/2189653/transcript-of-our-alphachat-with-tyler-cowen-about-stubborn-attachments/](https://ftalphaville.ft.com/2017/06/02/2189653/transcript-of-our-alphachat-with-tyler-cowen-about-stubborn-attachments/)
Tyler – accepting as a constraint on moral theory – what can people actually believe? "People can't believe that \[...\] it would crush the ice cream industry. I want a world where we still have ice cream, but where we can do more good for others than in the Peter Singer model".
How do we resolve disagreements, must we succumb to nihilism? In economics we have all these fancy constructs like the Arrow Impossibility Theorem – I think they're all overrated. I think there's a set of policy actions where if you look far enough into time some choices are way better than others and you can see that in a way that is obvious to pretty much everyone… but you need to look far enough into time.
Tyler – need rules to guide your behaviour otherwise you're having to calculate each and every time and you'll be crippled by your own epistemic uncertainty but if you have a series of rules that yield good outcomes, stick with those.
Common sense ethical views and utilitarianism are not that far apart, because if we did Singer stuff it'd reduce incentive to work so much that economic growth would be too slow.
You can think of a lot of the book \[Stubborn Attachments\] as applying the magic of compounding returns to ethics. Super simple, but academic philosophers tend not to do that.
Wealth is a buffer against tragedy.
Cowen – if there's something where at the margin it does not feel significant but if a lot of people did it it would add up to a lot, we should upgrade that in our calculations, and that gets us a bit away from complete agnosticism.
Stubborn – don't be complacent about growth.
After some point none of us are able to care about the distant future… I don't think it's actually a sign that you're reasonable or rational, even though it's the correct point of view… none of us are actually good enough to think that, so the way we get there is by having a kind of faith that the distant future matters.
\=> Faith as bridge between system 2 and system 1.
\=> Economic growth is faith based as an empirical matter, deriving from how human beings really are – that is, incapable of being good enough to really give a damn about 170 years from now. That's one of the key messages of the book.
You have to have doctrines we can believe in. The way you convince people is not always by giving them the facts, you have to make a faith based argument.
Yes to growth-enhancing redistribution. Put resources wherever they'll compound to create the most value.
Balance – don't model masochism. GDP (as wealth +) is the enjoyment.
## Dwarkesh Patel Lunar Society
[https://youtu.be/ayUZreGysTo](https://youtu.be/ayUZreGysTo)
- SA depends on time horizon -- not too long, not too short. If very long you stop caring about growth and just become very risk averse, you only care about safety.
- In the Stanford Talk, I estimated "in semi-joking but also semi-serious fashion, that we had 700 or 800 years left in us".
- "I am not a space optimist. I think the speed of light, the difficulties of travel, are really binding constraints, and maybe there will be vacations on the moon or something, but basically what we have to work with is earth."
- if you are a space optimist you may think that we can relax more about safety once we begin spreading to the stars. "You can get rid of that obsession with safety and replace it with an obsession with settling galaxies. But that also has a weirdness that I want to avoid, because that also means that something about the world we live in does not matter very much. You get trapped in this other kind of Pascal's wager, where it is just all about space and NASA and like fuck everyone else, right? And if that is right, it is right. But my intuition is that Pascal's wager type arguments both don't apply and shouldn't apply here, that we need to use something that works for humans here on earth."
\- why do you think we only have 800 years? "Uh, weapons of mass destruction. \[...\] If you let the clock tick out long enough, I don't think you have to believe that literally every human being will die, but just that civilisation will cease to exist."
\- in stubborn attachments your views end up aligning with common-sense morality a lot. Is there a deeper reason why common-sense morality is so often right about these issues, or is it just a coincidence? "Well, common-sense morality evolved, and I really wouldn't want to argue that it evolved to exactly the socially optimal point, but at some very gross level there is a kind of group selection. \[...\] If your philosophy totally contradicts common-sense morality, as I think Peter Singer's sometimes does, I think you should start worrying that maybe it won't actually fare that well if you tried it."
- I think in the last 20 years \[looking at China\] the case for autocracy has gotten stronger.
- when he wrote complacent class, TC didn't realise that the big decline in newly created businesses is mostly concentrated in retail. And this seems less worrying.
- have you changed your mind about the Great stagnation thesis? "Not at all - so much of the great stagnation is about education health care and services -- non-tech, not internet services. And those are still stagnating."
\- TC classic advice for young people: first, get one really good mentor, ideally two or three. And second get a small group of really good friends that you love talking to. "Small group theory and mentors, that's my generic advice".
Why are mentors so important? "I think they only give you a few things, but those things are so important. I think they give you a glimpse of what you can be, and you are oddly blind to that in the absence of those mentors, even if you are very very smart. So I think the rate of return to good mentors is just absolutely enormous. You don't need many, choose them wisely, more than one is okay, actually ignore most of what they say, but what you get from them will be so important. \[...\] I think usually it has nothing to do with what they tell you; they might tell you whatever BS, who knows, right. But it's what you see. \[...\] When I was young \[I think 14\] I met Walter Grinder, and he had tried to just read as many books as possible. And just the notion that you could be a human and you could do that, I got from him. That was a huge influence." Couldn't we just get that kind of influence from watching you on YouTube? "Well, you tell me, I think to some extent. But I think having flesh and blood mentors is very important; but again it's a portfolio approach, you want both, and now you can get both."
\- TC is co-authoring his book on talent with Daniel Gross. It will be a very practical book, both for talent seekers and for people who want to be found. It is more about spotting and signalling talent than about developing it.
\- "I don't think we understand historical contingency very well."
\- Thinks chance of a nuclear weapon killing someone within the next 80 years is over 70%. Might be an accident or a skirmish that destroys a few cities, most likely a terror attack. But chance of all-out nuclear war seems pretty low.
\- Thomas Schelling was TC's doctoral advisor. First person to ponder MAD scenarios in print.
\- TC thinks he started hitting diminishing returns to learning during his 30s
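The 70%-over-80-years nuclear estimate above can be restated as a constant annual hazard (a back-of-envelope sketch of my own, assuming independent years, which is not a claim Cowen makes):

```python
def annual_hazard(cumulative_p: float, years: int) -> float:
    """Constant per-year probability implying `cumulative_p` odds of at
    least one event over `years` years (assumes independent years)."""
    return 1 - (1 - cumulative_p) ** (1 / years)

print(round(annual_hazard(0.70, 80), 4))  # 0.0149, i.e. roughly 1.5% per year
```

Framed this way, "over 70% in 80 years" is a roughly 1.5% chance in any given year, which makes it easier to compare against other annualised risk estimates.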
## Software engineering daily
Tyler says he wants to be "the most successful economist to use the internet as a platform to foment broad enlightenment."
TC: Individuals should do a lot more of the stuff that companies do, including measurement, mission statement, vision, etc.
On IDW: I try to just do something through my own example and avoid the negative much more than they do. A lot of what they complain about I agree with, and I'm glad that somebody is doing it, but I don't feel a sort of emotional kinship with them. Do I have to go out there and complain about the same things? In a way I feel the opposite.
\[00:59:01\] TC: I feel Alexa should be asking me about existentialism, not vice versa.
\[00:59:05\] JM: Okay. To revisit this idea of reading people and absorbing the platform-agnostic sensibilities of people, as you are doing that, do you start to get these kind of like avatars of people in your head, and then when something new happens to you or you’re like at a restaurant, you kind of have these avatars of these people that you've studied in depth and you sort of can say like, “What would this person think about this Thai food or this new book that I'm reading?” Do you start to like really absorb these people's perspectives and have them play off of each other in your head?
\[00:59:41\] TC: I’m a big advocate of that, and for 30 years I’ve taught my students. I call it the Phantom Tyler Cowen. You want to have the Phantom Tyler Cowen sitting on your shoulder the rest of your life when you're thinking about economics or maybe a few other things, and then I found out maybe two years ago, like Peter Thiel and Mark Andreessen, they once had some kind of chat where they talked about exactly the same thing. But I think it's one of the best ways to learn is to develop these internal mental and emotional models of what other people would say with respect to what choices you're facing or what thoughts you’re having or what research paper you’re writing.
\[01:00:16\] JM: If we have phantoms, why do we need coaches?
\[01:00:19\] TC: Your phantoms are imperfect. Over time, the phantom becomes more like your twisted vision of what the person was and your phantom stops learning, and your coach understands you better all the time. So I'd say we need phantoms and coaches. We need phantom coaches and coach phantoms.
\[01:00:35\] JM: In contrast to phantoms, I think you have somebody like Ben Thompson who is self-referential. So in contrast with boxing with phantoms, you can box with your past self. But I feel like that can become a deep rabbit hole. Have you tried to calibrate your degree of self-referentiality?
\[01:00:57\] TC: 1997 Tyler is one of my phantoms, and while I think I'm optimistic, he was much more optimistic. To me, he looks uncritically optimistic, but he just had the sense things would really go great for the whole world, and in a way they have, but in a lot of politics I feel I've seen a lot of worrying backsliding, and I have some major fears that I didn't have back then in the mid to late 90s.
## David Perell
It's high status to be modest, so it's hard to get ppl to talk about their production function.
Perell: most successful ppl have some kind of compounding internal advantage, just like a company.
TC: started early, econ and social science seriously from age 13/14 => high absolute number of years to improve. Most peers at his age have stopped really trying to self-improve. No years of poor health. Start early and continue late.
TC: on average I'm less intelligent than my peers. My intelligence is pretty high. I played chess when young and I was very good, but there were always ppl smarter than I was. Just to learn that at age 11/12 was very good; a lot of really smart people never learn it.
To figure out early on: I'm actually pretty smart, but not THAT smart. That's part of my secret. Ppl either have one or the other -- either discouraged or lazy/complacent.
Social norm of clearing your plate is a very bad norm in my view. When food is plentiful and being overweight is a big problem, you really want to learn to eat less of what's on your plate.
\[Ppl you pick as friends should also be continuously improving over time. This will mean your relationships do not atrophy.\]
Why Silicon Valley ppl so interesting: super smart but not that well educated, think outside the box, both thinkers and ppl who have had to do things, pass various "reality tests" and that makes them much smarter, so many academics are lacking in that. The only test they've ever faced often is "can I publish this piece?" and I think that's stunting.
TC plans a lot of open space in his days.
Just one commitment: write every day. Literally every day. Don't have to worry about how much, if you do it every day you'll get better and faster.
Spends a lot of time writing out and exploring views he disagrees with. That might be another major production function advantage.
On writing style: "I am good at being either blunt and to the point or Straussian and complex which is clear in a very roundabout way but doesn't look clear to the uninitiated."
Doesn't outline, it seems to me like an excuse not to write. How do you know what you think until you write it?
Redrafts a lot, 10 times for typical book paragraph.
TC: I don't think of Peter Thiel as a tech person, I think of him as a humanities person, with maybe the deepest understanding of the humanities that is out there now, of anyone. I think it's not an accident that Peter is fairly religious, that he has embraced René Girard; I mean, he has a degree in philosophy from Stanford and a law degree. For understanding the humanities side of what is going on in American society right now, he to me has been thinker number one for some time. Thiel may have the best BS detector of anyone. Maybe the best selector of talent America has seen, at least in the last 50 years. For that you need a pretty deep understanding of the humanities. Being bicultural and bilingual is an underrated source of Thiel's edge.
TC really is hyperlexic, taught himself to read aged 2. Really can prob read 5-10x faster than most peers. Ppl have stopped doubting that I understand what I read now that I do podcasting.
Underlying philosophy of MR = belief in excellence.
Collison: learning a new area quickly is his great strength, best TC has met at this.
Perell and Cowen go to jazz concerts at The Village Vanguard, NYC.
"Answering email is my business model."
TC uses Gmail via web browser on his iPad.
## Village global interview
Role of aesthetics in progress studies
Central!
Patrick Collison isn't about a set of views, he's about a way of thinking, which he calls fallibilism.
Gurri: when you can see political leaders in great detail on the internet they inevitably decline in status, too much scrutiny is not good for anyone's reputation. They're brought down to a lower level, we realise their mediocrity or they appear mediocre even when they're not.
On Glen Weyl: I think simpler political rules tend to be better; they're mainly there to produce legitimacy and give people a stake in being involved, and something like quadratic voting is too complicated. And the fact that not even a local bridge club has seen fit to introduce quadratic voting … I don't think he takes that seriously enough, so I think he should be a bit more Burkean.
## Jason Crawford Torch of Progress
Fast Grants ~£20 million.
Thing I didn't understand at first: publicity will be bad for the programme. Learning about the programme is part of the application process. That's the best filter we have; if it became too well known I don't think it would work that well anymore.
TC has recruited scouts. Scouts are currently obscure. But they will become more well known. They can make grants without needing my ok. Future of programme is scouts.
Small group of very smart friends; find one or two really good mentors who will help you out and actually care about you. I think those things are at least 5x more important than they're made out to be. I think that obsessing about which school you go to is a bit overrated. I think that's good advice for most ppl.
TC thinks that scientific tech on net is a big improvement in reducing risk?? Stupid humans are a far bigger risk than superintelligent machines. I'm 100x more worried about that.
Bostrom is one of the smartest people today. He developed and popularised existential risk. Ppl think they can make things more robust. I'm skeptical.
### PH suggested questions
1\. In an interview with David Perell, you said that the scarce resource of our time is motivation. Quoting you: "The real scarce input is the preacher, the moral leader, the inspirer, the mentor or the role model." What explains this scarcity? Why does no one want to be a preacher?
Context of the discussion: education. Less and less do you need teachers to shove things down your throat. Your teachers don't actually have better information. What they can do is help you figure out what you care about, what you're good at, and motivate you.
2\. In an interview with Rob Wiblin, you said that the probability that neither humans nor a successor species would exist in 100 years is extremely small, roughly equivalent to "whatever is the chance of some galactic catastrophe". This estimate is very different from that of Toby Ord, who in his recent book puts the odds of extinction this century, even assuming we somewhat get our act together on reducing risk, at 1/6. What explains the large difference here?
3\. You sometimes discuss the idea of a "common moral cone" or "moral sphere", which serves as an anchor for our comparisons, and presumably for our conception of progress. Could you elaborate on what you mean by that? Is this relativism? What are the best philosophical treatments of this idea? Does the concept of progress collapse, or at least become much less motivating, if you think on a time frame of thousands or millions of years?
4\. In 2018 you wrote a post called "The high-return activity of raising others' aspirations". What have you learnt about how to do this well? How risky is this?
### PH brainstorming on questions
#### Aspiration studies
- In an interview with David Perell, you said that the scarce resource of our time is motivation. Quoting you: "The real scarce input is the preacher, the moral leader, the inspirer, the mentor or the role model."
- What explains this scarcity? Why does no one want to be a preacher?
- Your TED talk expressed concern about good vs evil stories. But if motivation is scarce, do we perhaps need more such stories, artfully deployed?
- What advice, on the margin, would you have for a figure like Greta Thunberg, Jordan Peterson, Russell Brand, Will MacAskill?
- What makes a good exemplar?
- How should people choose their exemplars? How did you choose yours?
- What should people pay more attention to, if they are likely to become exemplars for others?
- JFK said "We Choose to go to the moon". Should we write similar speeches today?
- In 2018 you wrote a post called "The high-return activity of raising others' aspirations". What have you learnt about how to do this well? How to minimise downside risk?
- Do you think of raising aspirations as a central goal of the Progress Studies movement?
- TC: "At critical moments in time, you can raise the aspirations of other people significantly, especially when they are relatively young, simply by suggesting they do something better or more ambitious than what they might have in mind. It costs you relatively little to do this, but the benefit to them, and to the broader world, may be enormous. This is in fact one of the most valuable things you can do with your time and with your life."
- What have you learnt about how to do this well?
- What mistakes do people make when trying to do this?
- What are the risks or downsides of raising aspirations?
- One side-effect of raised aspirations might be increased levels of frustration, resentment and envy. Not everyone can be Elon Musk.
- You sometimes discuss the idea of a "common moral cone" or "moral sphere", which serves as an anchor for our comparisons, and presumably for our conception of progress.
- Could you elaborate on what you mean by that? Is this relativism? What are the best philosophical treatments of this idea?
- Does the concept of progress collapse, or at least become much less motivating, if you think on a time frame of thousands or millions of years? (Because whatever grounds our moral sphere now will be radically transformed on such a timescale).
- You've often mentioned that you don't know how to think about animals. How do we compare 1 million happy insects to a human having a migraine, etc. Have you made any progress on your thinking on this issue recently?
- How big is the future? What is the probability that humans or our descendants will be around in 100 thousand, 1 million or 100 million years? How does your view on the size of our future affect your view of the priorities for Progress Studies?
#### Catastrophic and existential risk
- A lot of people, including a lot of young people, are worried that our current trajectory is unsustainable – that if we continue making "progress" along a broadly "business as usual" trajectory, the probability of a catastrophic or existential disaster trends towards 1. What do you say to that?
- In an interview with Rob Wiblin, you said that the probability that neither humans nor a successor species would exist in 100 years is extremely small, roughly equivalent to "whatever is the chance of some galactic catastrophe". This estimate is very different from that of Toby Ord, who in his recent book puts the odds of extinction this century, even assuming we somewhat get our act together, at 1/6 \[1\]. What explains the large difference here?
- If the truth is closer to Toby's numbers, what would that imply for the priorities of the Progress Studies community?
- What would an ideal preacher for catastrophic and existential risks look like? Would they preach to a broad, or a narrow, congregation?
- On the Bostromian world view, the order in which we develop new capabilities really matters, and we should strive to acquire safety promoting capabilities before we acquire the most dangerous capabilities. What do you make of this prescription? Do you have thoughts on how we should operationalise it?
- What do you make of Bostrom's claim in the Vulnerable World Hypothesis paper that we probably need to shift to very different world order if we're to successfully manage the risks associated with greater technological capabilities?
- What are the most important things we could plausibly have done in the past 2 decades to mitigate the impacts of a pandemic like COVID-19? What does this imply for what we should do more or less of in the next 2 decades?
\[1\] Ord's numbers, from "The Precipice":

#### Notes from David Deutsch - Rees debate that might be grounds for interesting questions
[https://www.thersa.org/events/2015/10/optimism-knowledge-and-the-future-of-enlightenment](https://www.thersa.org/events/2015/10/optimism-knowledge-and-the-future-of-enlightenment)
Deutsch
- "I think civilization is currently burdened by a debilitating pessimism. Not just prophecies of Doom, because they have always existed, but something deeper. The term technological fix has become as pejorative as luddite used to be. The desire for technological solutions is now widely regarded as naive."
- In The Beginning of Infinity, DD wrote: "Pessimism has been endemic in almost every society throughout history."
- "Sagan speculated that if the ancient Athenian society had not collapsed (probably in large part due to a severe plague) we might now be spreading through the solar system."
Matthew Taylor (RSA chief, former top advisor to Blair)
- In certain key areas we know less than we did 50 years ago. Political leaders know less about how to lead in the circumstances they face than their peers did 40-50 years ago. Because the world has become more complex, because populations have become more diverse, because we are less deferential, our political leaders are much more at sea, much less confident of their knowledge of how to drive change in societies. We must factor into this debate that some forms of technological progress make some forms of knowledge go backwards.
## Atlantic Progress Studies article
[https://www.theatlantic.com/science/archive/2019/07/we-need-new-science-progress/594946/](https://www.theatlantic.com/science/archive/2019/07/we-need-new-science-progress/594946/)
Progress itself is understudied. By “progress,” we mean the combination of economic, technological, scientific, cultural, and organizational advancement that has transformed our lives and raised standards of living over the past couple of centuries. For a number of reasons, there is no broad-based intellectual movement focused on understanding the dynamics of progress, or targeting the deeper goal of speeding it up. We believe that it deserves a dedicated field of study. We suggest inaugurating the discipline of “Progress Studies.”
When we consider other major determinants of progress, we see insufficient engagement with the central questions. For example, there’s a growing body of evidence suggesting that management practices determine a great deal of the difference in performance between organizations. One recent study found that a particular intervention—teaching better management practices to firms in Italy—improved productivity by 49 percent over 15 years when compared with peer firms that didn’t receive the training. How widely does this apply, and can it be repeated? Economists have been learning that firm productivity commonly varies within a given sector by a factor of two or three, which implies that a priority in management science and organizational psychology should be understanding the drivers of these differences. In a related vein, we’re coming to appreciate more and more that organizations with higher levels of trust can delegate authority more effectively, thereby boosting their responsiveness and ability to handle problems. Organizations as varied as Y Combinator, MIT’s Radiation Lab, and ARPA have astonishing track records in catalyzing progress far beyond their confines. While research exists on all of these fronts, we’re underinvesting considerably. These examples collectively indicate that one of our highest priorities should be figuring out interventions that increase the efficacy, productivity, and innovative capacity of human organizations.
Similarly, while science generates much of our prosperity, scientists and researchers themselves do not sufficiently obsess over how it should be organized. In a recent paper, Pierre Azoulay and co-authors concluded that Howard Hughes Medical Institute’s long-term grants to high-potential scientists made those scientists 96 percent more likely to produce breakthrough work. If this finding is borne out, it suggests that present funding mechanisms are likely to be far from optimal, in part because they do not focus enough on research autonomy and risk taking.
More broadly, demographics and institutional momentum have caused enormous but invisible changes in the way we support science. For example, the National Institutes of Health (the largest science-funding body in the U.S.) reports that, in 1980, it gave 12 times more funding to early-career scientists (under 40) than it did to later-career scientists (over 50). Today, that has flipped: Five times more money now goes to scientists of age 50 or older. Is this skew toward funding older scientists an improvement? If not, how should science funding be allocated? We might also wonder: Do prizes matter? Or fellowships, or sabbaticals? Should other countries organize their scientific bodies along the lines of those in the U.S. or pursue deliberate variation? Despite the importance of the issues, critical evaluation of how science is practiced and funded is in short supply, for perhaps unsurprising reasons. Doing so would be an important part of Progress Studies.
Progress Studies has antecedents, both within fields and institutions. The economics of innovation is a critical topic and should assume a much larger place within economics. The Center for Science and the Imagination at Arizona State University seeks to encourage optimistic thinking about the future through fiction and narrative: It observes, almost certainly correctly, that imagination and ambition themselves play a large role. Graham Allison and Niall Ferguson have called for an “applied history” movement, to better draw lessons from history and apply them to real-world problems, including through the advising of political leaders. Ideas and institutions like these could be more effective if part of an explicit, broader movement.
An important distinction between our proposed Progress Studies and a lot of existing scholarship is that mere comprehension is not the goal. When anthropologists look at scientists, they’re trying to understand the species. But when viewed through the lens of Progress Studies, the implicit question is how scientists (or funders or evaluators of scientists) should be acting. The success of Progress Studies will come from its ability to identify effective progress-increasing interventions and the extent to which they are adopted by universities, funding agencies, philanthropists, entrepreneurs, policy makers, and other institutions. In that sense, Progress Studies is closer to medicine than biology: The goal is to treat, not merely to understand.
If we look to history, the organization of intellectual fields, as generally recognized realms of effort and funding, has mattered a great deal. Areas of study have expanded greatly since the early European universities were formed to advance theological thinking. Organized study of philosophy and the natural sciences later spawned deeper examination of—to name a few—mathematics, physics, chemistry, biology, and economics. Each discipline, in turn with its subfields, has spawned many subsequent transformative discoveries. Our point, quite simply, is that this process has yet to reach a natural end, and that a more focused, explicit study of progress itself should be one of the next steps.
## Rob Wiblin 80K
[https://80000hours.org/podcast/episodes/tyler-cowen-stubborn-attachments/](https://80000hours.org/podcast/episodes/tyler-cowen-stubborn-attachments/)
Nuclear weapons to me are always the number 1 issue. But that said, even if you sat down and said, “I’m gonna limit nuclear war,” I don’t know what that means operationally. If you’re a president or a parliament, or maybe if you head a particular nonprofit… \[...\] whereas to boost the rate of economic growth, there is plenty that most people can do in that direction. So I wish we had more good avenues for reducing the risk of nuclear war. I’d be keen to hear about them; we’d actually be keen to support them with Emergent Ventures.
It still seems to me that education is a net positive for coordinating people and limiting their desire to slaughter each other. I understand it’s not always the case — a lot of the Nazis were well educated and so on. But still, on net, I think it’s a positive force.
Growth and education tend to come together. If we’re growing more, we can afford more education, we can do more to support education in poorer countries. So I still think economic growth is at least a partial, indirect means to some of those ends. Again, it’s something that’s easy to concretize. You can, to some extent, measure it. You know when you’re failing. And that makes it more useful than some other kinds of advice that maybe I still would truly fully support.
But see, I see it the other way around. If you look at data on economic growth, you see huge productivity improvements: China, India, basically free-riding on existing technologies, not usually making them better. It’s just managing companies better, having better incentives in companies.
If the world economy grew 4-point-whatever percent last year (say, 4.8 percent), way more of that is coming from better management and better institutions than is coming from new technology. Maybe 1 percentage point of it is coming from new technology and the rest from better management and, in some cases, growing population and capital resources.
So institutions are way out-racing technology right now. Again, I’m not taking that for granted, but I think people would be much more optimistic if they viewed it in that light.
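Cowen’s back-of-the-envelope decomposition above can be sketched numerically. This is only an illustration: the 4.8 percent total and the roughly 1-point technology contribution are his illustrative figures from the passage, not measured data.

```python
# Illustrative growth-accounting arithmetic from the passage above.
# total_growth and from_technology are Cowen's rough numbers, not data.
total_growth = 4.8       # world growth last year, in percentage points
from_technology = 1.0    # rough contribution of new technology

# Everything else: better management, better institutions, and in some
# cases growing population and capital resources.
residual = total_growth - from_technology

print(f"technology share of growth:     {from_technology / total_growth:.0%}")
print(f"non-technology share of growth: {residual / total_growth:.0%}")
```

With these numbers, new technology accounts for only about a fifth of last year’s growth and institutions and management for roughly four-fifths, which is the sense in which “institutions are way out-racing technology.”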
But the idea of there being this general public movement where you get people to do the right thing by scaring them, I think that's the opposite of how politics usually works. Voters like to live in denial, and if you scare people too much with, say, climate change, they respond by thinking it’s not actually all that significant. I think some kind of more positive vision — you’re more likely to get people on the sustainability bandwagon.
That’s one of the backstories to my book: I’m trying to give a positive vision, emphasizing less scaring the heck out of people and more, “Here are the glories at the end of the road, what you can do for your descendants and world history.” Scaring people seems to backfire in politics.
Tyler Cowen: I worry a great deal about surveillance, which, of course, has proceeded most rapidly in China. If surveillance really would make us safer, that would be an argument for it. But surveillance tends to corrupt your rulers, and it tends to increase the returns to being in charge. I think, over time, it increases the chances of, say, a coup d’état or political instability in China.
Even though you have more stability at the ground level, you may have less stability at the top. I think this is one of the two or three biggest issues facing the world right now: What are we going to do with surveillance and AI, facial and gait recognition? I don’t think we know what to do. I would say I more worry about it than applaud it.
Robert Wiblin: I think I’m with you. I’m not sure whether more surveillance or less surveillance is better right now. But it seems like finding better ways to govern surveillance, given that we’re probably going to have quite a lot of it, so that it doesn’t lead to these negative political outcomes, could be an extremely important research question that more think tanks should be looking into.
Robert Wiblin: Okay, let’s move on to some other things in the book that I wasn’t entirely convinced by. You make the argument in one of the chapters that, even though our actions seem to have very large and morally significant effects in the long run, that doesn’t necessarily mean that we have incredibly onerous moral duties. We don’t necessarily have to set aside all of our projects in order to maximize the growth rate of GDP or improve civilizational stability. What’s your case, there?
Tyler Cowen: Well, I do think you have an obligation to act in accordance with maximizing the growth rate of GDP, but given how human beings are built, that’s mostly going to involve leading a pretty selfish life: trying to earn more, having a family, raising your children well. It’s close to in sync with common-sense morality, which to me is a plus of my argument. What it’s telling you to do doesn’t sound so crazy.
You don’t have to re-engineer human nature. So if someone from more of a Peter Singer direction says, “Well, all the doctors have to run off to Africa,” people won’t do that. We can’t and shouldn’t coerce them into doing that.
The notion that, by living a “good life” but making some improvements at the margin, you’re doing what you’re obliged to do, I find that very appealing. It’s like, “Change at the margin, small steps toward a much better world.” That’s the subheader on Marginal Revolution. It’s also a more saleable vision, but I think the fact that it accords with longstanding moral intuitions shows it’s on the right track.
Robert Wiblin: Yeah, okay. It seems like, given your framework of long-termism, the moral consequences of our actions are much larger than what most people think when they’re only thinking about the short-term effects of their actions. In that sense, the moral consequences should bear on us more than they otherwise do.
Tyler Cowen: It’s very tricky, though. If you go around telling people, “Everything you do is going to change the whole world,” they’re going to get pissed off at you. They’re going to tune you out, so there’s a Straussian undercurrent in the book. The long term is really important, but people still need to focus to some extent on the short term to get to the long term. They can only handle so much computationally.
It’s not that I think the right answer is for everyone to be so attuned to the exact correct moral theory. They’re going to use rules of thumb. We’re going to rely on common-sense morality whether we like it or not — even professional philosophers will, and that’s okay, is one thing I’m saying. Just always seek some improvement at the margin.
If you’re trying to find the intuition you should be least skeptical about, I would say it’s that lives much richer or happier and full of these plural values to an extreme degree are better than other lives. Even there, we can’t be sure, but that seems a kind of bedrock. If you won’t accept that, I don’t know how there’s any discourse.
Robert Wiblin: In this book, the influence of the philosopher Derek Parfit is clearly really vast. What do you think Parfit was most wrong about, and what do you think he was most right about that’s unappreciated today?
Tyler Cowen: Not too long before he died, Parfit gave a talk. I think it’s still on YouTube. I think it was at Oxford. It was on effective altruism. He spoke maybe for 90 minutes, and he never once mentioned economic growth, never talked about gains in emerging economies, never mentioned China.
I’m not sure he said anything in the talk that was wrong, but that omission strikes me as so badly wrong that the whole talk was misleading. It was all about redistribution, which I think has a role, but economic growth is much better when you can get it. So, not knowing enough about some of the social sciences and seeing the import of growth is where he was most wrong.
Nick Bostrom has an engineering mentality to his work that Parfit never did. Like, “What can we do? What should we do? How do we apply resources?” Maybe it’s the next step, but who is the next Socrates? We will see. Probably it will be someone from quite an unexpected corner.
I would also mention two influential figures: Nozick and Rawls. Rawls influenced a lot of people, but when you read Rawls on growth and the future, it’s incoherent. Rawls is afraid of economic growth. At times, he seems to endorse a stationary state because any savings makes the first generation worse off, and they’re the least well-off people. That to me is a reductio on Rawls’s argument, the entire argument.
Robert Wiblin: To me, also.
Tyler Cowen: If the pessimistic scenario is correct, history is cyclical and we’re going to undergo some kind of retrograde process. There will be some future (we’re not all going to die), but the amount of value in that future is not high enough for the option of continued growth through the future to be the dominant one in deciding what it is we should do. There’s a pretty good chance that’s correct and I’m wrong.
Robert Wiblin: Let’s work to make that false. What would Tyrone have to say about the book?
Tyler Cowen: Well, I think Tyrone would endorse the pessimistic view, that the future is not so grand and glorious. It doesn’t have the moral power I attribute to it, and that we just ought to have more of a kind of Nietzschean scramble for the here and now, and there is no final adjudicator of these clashing values.
Morality becomes not so much deontological, but for Tyrone, it would become relativistic and almost nihilistic. That’s what Tyrone said to me about this book. He bugs me all the time. I try to shut him up, but I can’t do it.
Robert Wiblin: What about guests choosing to pass on questions in overrated and underrated?
Tyler Cowen: More people should answer underrated or overrated rather than pass, because you’re not expected to give a final answer. It’s understood it’s all about Bayesian updating and what small piece of wisdom you can bring to bear on what others know. So the idea that you can’t give some perfect answer, or that you only want to talk about your specialty, I think that’s a cop-out. No one in real life behaves that way.
Just because you have this artificial distinction between academic knowledge and practical knowledge that you don’t want to say something for fear you’ll look bad, I don’t see why. Give it a shot. What’s the harm?
Robert Wiblin: I guess, yeah. I find overrated and underrated somewhat frustrating at times because I feel like most things are just appropriately rated. The market is generally right. But people very rarely defer to that. Guests almost never say that things are appropriately rated. Do you think we should at least say appropriately rated more often?
Tyler Cowen: Once you consider diversity of opinion, arguably, nothing is appropriately rated, right? Someone rates it appropriately.
Robert Wiblin: It depends whether you’re asking an objective or subjective question. It’s ambiguous.
Tyler Cowen: Right. But you always have the option of teasing out who overrates it and who underrates it. And I try to do that in some of my answers. Some things . . . unemployment insurance, is that appropriately rated? Maybe that is. But for the most part, there’s more you can say.
Robert Wiblin: Okay. Let’s push on to some questions that aren’t about the book. What are one or two of the most important things that individuals could do to raise economic growth, in your view? Listeners, especially.
Tyler Cowen: I think most people are actually pretty good at knowing their weaknesses. They’re often not very good at knowing their talents and strengths. And I include highly successful people. You ask them to account for their success, and they’ll resort to a bunch of cliches, which are probably true, but not really getting at exactly what they are good at.
If I ask you, “Robert Wiblin, what exactly are you good at?” I suspect your answer isn’t good enough. So just figuring that out and investing more in friends, support network, peers who can help you realize that vision, people still don’t do enough of that.
Robert Wiblin: Speaking of Tetlock, are there any really important questions in economics or social science that . . . What would be your top three questions that you’d love to see get more attention?
Tyler Cowen: Well, what’s the single question is hard to say. But in general, the role of what is sometimes called culture. What is culture? How does environment matter? I’m sure you know the twin studies where you have identical twins separated at birth, and they grow up in two separate environments and they seem to turn out more or less the same. That’s suggesting some kinds of environmental differences don’t matter.
But then if you simply look at different countries, people who grow up, say, in Croatia compared to people who grow up in Sweden — they have quite different norms, attitudes, practices. So when you’re controlling the environment that much, surrounding culture matters a great deal. So what are the margins where it matters and doesn’t? What are the mechanisms? That, to me, is one important question.
A question that will become increasingly important is why do face-to-face interactions matter? Why don’t we only interact with people online? Teach them online, have them work for us online. Seems that doesn’t work. You need to meet people.
But what is it? Is it the ability to kind of look them square in the eye in meatspace? Is it that you have your peripheral vision picking up other things they do? Is it that subconsciously somehow you’re smelling them or taking in some other kind of input?
What’s really special about face-to-face? How can we measure it? How can we try to recreate that through AR or VR? I think that’s a big frontier question right now. It’d help us boost productivity a lot.
Those would be two examples of issues I think about.
How to make philanthropy better. Everyone says “measure results.” I think it’s become a cliché. It’s trivial. It’s begging the question. What is it you’re measuring? What counts as good? What counts as bad?
Might it not be the case that a lot of initiatives will do better by not trying to measure results, and by going for outliers rather than homogenizing, with everyone running after the same easy-to-measure kinds of results? So okay, if it’s not “measure results,” what is it then? We need more philanthropic experiments, and to think about them critically.
Robert Wiblin: How likely is it that the most important thing to track is not the global economic growth rate, but rather the relative growth rate of countries that have the best moral values versus those that have relatively worse moral values? That it’s a more Manichean vision of the future, where the right people have to have power and influence?
Tyler Cowen: It’s a very good question. I don’t think we can say which countries have the best moral values. A lot of people will tell you that’s the United States, but our longer-run history is a pretty brutal one, and we’ve treated a lot of disadvantaged groups very badly.
If you look at a lot of smaller countries, they’ve done a lot less wrong-doing, but they also were not in a position to. So to glorify them as the model to copy, I think is begging the question because if they had more power, well, what would they have done? We don’t know.
I think of it more in terms of strands or tendencies we want to encourage in all the countries. In the so-called bad, evil countries, so many people can be so good, or good at heart, maybe even partly because they’re in a bad country; in the former Soviet Union, bonds of friendship often were stronger.
So I think we need to unpack the whole notion of the better and worse countries and mostly be a lot more self-critical about our own country, whichever one that may be.
Robert Wiblin: Do you think that people overstate or understate the differences both morally and otherwise between people in different countries?
Tyler Cowen: I think they overstate the differences. Most people are selfish in a wide variety of situations. And their environments change — they can be much better or much worse. But to really say these are the bad people, I’m pretty reluctant to do that.
Maybe the fundamental, and indeed, insoluble problem of philosophy is how to integrate the claims of nature with the claims of culture. They’re such separate spheres, but they interact all the time.
In the final appendix of my book, appendix B, I talk about this problem: how do you weight the interests of humans versus animals, or creatures that have very little to do with human beings? I think there’s no answer to that. The moral arguments of Stubborn Attachments are all within a cone of sustainable growth for some set of beings. And as for comparing across beings, I don’t think anyone has good moral theories for that.
Robert Wiblin: But it seems like on your view, you should think that, while we don’t know what the correct moral trade-off is between humans and animals, there is a correct moral trade-off. It’s just very hard to figure out what it is.
Tyler Cowen: I’m not sure what we would make reference to to make that trade-off. There’s some intuitionism, like gratuitous cruelty to animals — even not very intelligent ones — people seem to think is bad. That’s easy enough to buy into.
Robert Wiblin: But you support interpersonal aggregation across humans. Then it just seems like there should be a similar principle — though more difficult to apply in practice — that would apply to a chimpanzee and a human?
Tyler Cowen: We’re very far from knowing what it is. But chimpanzees are pretty close to humans; there, that strikes me as quite possible. But if you’re talking about bees and humans . . . What if another billion bees can exist, but one human has to have ongoing problems with migraine headaches? My best guess is we will never have a way of really solving that question using ethics.
Robert Wiblin: Yeah. I agree that the practical problem gets very severe when you’re comparing humans and insects. But I think, in principle, the solution follows the same kind of process as when you’re comparing humans and other humans and chimps.
Tyler Cowen: I’m not sure the practical problem is different from the conceptual problem. I think it’s a conceptual problem, not a practical one. We could hook up all the measurements to those bees we want, and at the end of the day, whether a billion of them is worth a migraine headache for a human . . .
Robert Wiblin: But you say you’re a moral realist. Shouldn’t there be an answer then?
Tyler Cowen: I don’t think there’s an answer to every question under moral realism.
Tyler Cowen: I don’t think Parfit, in naming it the repugnant conclusion, was himself begging the question. He was trying to draw people’s attention to it with a vivid word. I think he fully well understood it might not be repugnant.
But if you think about a life in the repugnant conclusion, well, you’re alive for a few minutes, someone feeds you a potato, you hear some Muzak, and you pass away. Well, isn’t that better than nothing? In my view, those are not human lives as we understand the terms, even if they look like humanoid beings.
So it’s getting back to the question of comparing a billion bees to one person having a migraine headache, and I just don’t think we can do it. That moral realism can’t handle utility comparisons across very different kinds of beings.
Robert Wiblin: Yeah. I feel like a weakness of the repugnant conclusion kind of thought experiment is that it ties together multiple issues. One thing that people don’t like about it is the blandness of it, the stability of the welfare that people have in that world.
If you imagine that where we are now is a repugnant-conclusion kind of world: humans have ups and downs, and on net our lives may be only weakly positive. But people don’t say it’s terrible that there are more people with lives of the kind we have, or that it’s not desirable for civilization to continue, just because our lives could be much better in principle.
We could imagine beings that have 100 times the welfare that we do over their lives. By comparison to them, this world is kind of the repugnant conclusion, basically.
Tyler Cowen: That’s right. We have intuitions across a lot of different features of the utility distribution, and some of them, I suspect, are implanted in us in a false way and not really valid for moral theory.
Robert Wiblin: All right. If you want to finish with perhaps one stirring message for people to go out and improve the long-term future, what would you say?
Tyler Cowen: Don’t look for stirring messages. Think about things more than once. Thank you for listening to the podcast. And if I have a stirring message, it’s continue to follow the career of Robert Wiblin. I’m very grateful to him for all the time and energy he’s put into being the interviewer in Conversations with Tyler.
Robert Wiblin: If you’d like to learn more about the topics in the book, can I suggest checking out Nick Bostrom’s paper, Astronomical Waste, or Nick Beckstead’s On the overwhelming importance of shaping the far future.
Tyler Cowen: So every single thing you do, including our discussion, remixes the future course of world history. If you’re a consequentialist, you need to take that seriously. You need to ask, “Does this simply make my entire doctrine incoherent?”
The stance I take in the book is, if you’re pursuing this truly large significant grand goal of making the future much, much better off in expected value terms, that will stand above the froth of the uncertainty you create by remixing things with every particular decision.
Robert Wiblin: This reminded me pretty strongly of the nonidentity problem from Derek Parfit, with whom you once wrote a paper about discount rates. Do you want to describe the nonidentity problem?
Tyler Cowen: Derek Parfit, in his 1984 book, Reasons and Persons, had an example that, say, you would bury nuclear waste, and several generations from now, the waste would, say, kill millions of people.
But the fact that you buried the waste would change the timings of subsequent conceptions. So the people who are being killed a thousand or, say, a million years from now wouldn’t have been born unless you had buried the waste.
You could argue, “Well, I haven’t harmed anyone at all. By burying the waste, I caused them to be born.” They die of a terrible cancer when they’re 27 years old, but on net, this is still following the Pareto principle.
I think I have an argument for why that’s wrong: namely, consider the case where you don’t commit the very harmful act. You might have different identities of people, but you’ll have a much greater aggregate of good in the more distant future. And it’s not about individual identities. So there’s something a little oddly collectivist about my argument, you might say.
Robert Wiblin: I guess many people have something like this intuition that, yeah, if you change the identities of people in the future, such that you can’t see any correspondence between them in the two different scenarios, then perhaps it doesn’t matter exactly what you do because there’s no significant person you can identify who’s worse off. Do you think this is a very strong counterargument to those views?
Tyler Cowen: Here’s a tension I think that we all have to face up to. Parfit talks about something called the person-affecting principle. How does your action affect some particular person?
But if you’re willing to make aggregate judgments and engage in an active aggregation, saying some kinds of societies are better than others or some policies are better than others, there’s something in the micro foundations of that judgment that’s fairly nonindividualistic.
People want to be consequentialists, and they want to be pure individualists sometimes. It’s not actually a fully happy marriage of views. And the notion that, once you jam together different measurements of well-being, you’re making a collective judgment about the overall course of history is even slightly Hegelian. You could also think of this book as a Hegelian defense of liberty.
Robert Wiblin: I think I slightly messed up the explanation of the person-affecting issue there. Because often, what’s going on is people want to say, “If you change the number of people in the future, that can’t necessarily be good or bad because there’s no specific individual in both cases who is worse off or better off. The people who exist in one scenario are not in the other. They’re only there in one case, so you can’t say that they have a higher welfare than a specific person in the other scenario.”
But then when you point out that, “Well, almost everything you do is going to result in there being no correspondence between the list of people in one scenario and the list of people in the other scenario. So it doesn’t matter whether you bury this nuclear waste that’s going to greatly harm people in the future in one case.”
People tend not to like that conclusion. That just seems very counterintuitive and wasn’t really what they were aiming for. So you’re going to take the total view here, where we just sum up the consequences of our actions on lots of different people?
Tyler Cowen: Subject to rights constraints. Yes.
Robert Wiblin: Do you think there’s any plausible alternative to doing that?
Tyler Cowen: I think the most plausible alternative to my view is simply to say the actual time horizon is not very long, that maybe in an extreme case, either the world will end soon or history will start collapsing and run in reverse. So there is no grand, glorious future that has a heavy weight in the calculation. And thus, we’re always dealing with the here and now, a quite pessimistic view.
I think that’s the main rival view to what I put out in the book. I don’t feel I refute that argument. It’s going to be true with some probability, right? If you do an expected value calculation, well, retrogression is true with, say, probability 37 percent, progress with 63 percent. In the expected value calculation, progress is still going to win. It will have the dominant weight. But we need to be very careful. Don’t assume progress is possible.
And the other, pessimistic theory of history, the one, say, a lot of the ancient Greeks would have accepted, may well be true. And if that’s true, then this vision is a kind of large mistake. But you cannot live with pessimism, right? There’s also a notion that optimism is a partially self-fulfilling prophecy: believing pessimistic views might make them more likely to come about.
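The expected-value argument above can be made concrete with a toy calculation. The 63/37 probabilities are Cowen’s illustrative numbers from the passage; the payoff values below are hypothetical placeholders, chosen only to show why the progress scenario dominates whenever its payoff is much larger than the retrogression payoff.

```python
# Toy expected-value calculation for the progress-vs-retrogression bet.
# Probabilities come from the passage; payoff values are hypothetical.
p_progress, p_retrogression = 0.63, 0.37

value_progress = 1_000.0   # hypothetical value of a grand, glorious future
value_retrogression = 1.0  # hypothetical value if history runs in reverse

ev = p_progress * value_progress + p_retrogression * value_retrogression

# Share of the expected value contributed by the progress scenario.
progress_weight = p_progress * value_progress / ev
print(f"expected value: {ev:.2f}")
print(f"progress scenario's share of EV: {progress_weight:.1%}")
```

With these placeholder payoffs, the progress scenario contributes nearly all of the expected value even at only 63 percent probability, which is the sense in which progress “will have the dominant weight” in the calculation.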
Robert Wiblin: Yeah. In the seminar room, it seems like economists, and sometimes philosophers, are not willing to aggregate welfare across people. If one person’s worse off than someone else, you just won’t be able to say whether, overall, the situation is better. But then it seems like in everyday life, when they’re making calls about what a group of people should do, they’re always willing to aggregate.
Tyler Cowen: Absolutely.
Robert Wiblin: That’s the immediate argument that they’ll always turn to. What do you think’s going on there?
Tyler Cowen: We’re always willing to choose a restaurant, right? Even if it’s not the first choice of all people. And the aggregations we make when looking at economic growth are often quite large differences in income. But I think there’s a fear amongst a lot of philosophers that if you’re too willing to aggregate, utilitarianism becomes too strong and they don’t like all of the consequences of that decision.
So they try to draw the line at aggregation. But as you mentioned, I think that’s grossly inconsistent with how we treat instrumental reason in our lives, in our businesses, in our nonprofits. We aggregate all the time. We’re wrong a lot, but the judgments are not completely outside the bands of reason, either.
Robert Wiblin: Something that’s even stranger about that argument, to me, is that, if one person is made better off in a scenario and someone else is made worse off, it’s not the case that it’s forbidden to do the thing where one person was made worse off. It simply makes it incommensurable with the other scenario.
So you simply can’t say whether it’s better or worse, I think, on this view. It doesn’t lead to the conclusion that I think people want, which is that you should be unwilling to harm one person to benefit a large number of people elsewhere. You simply can’t say whether it’s better or worse, and so all bets are off, and indeed, it’s, in a sense, permissible. Has that argument occurred to you?
Tyler Cowen: Sure. It’s always instructive to look at how people behave as parents or maybe how they vote in a department when they’re dealing with their colleagues. And they’re some form of consequentialist in all those cases. If you take the intuitions they’re using in these smaller decisions and just build them up onto a larger scale, I think the logic of consequentialism is very, very hard to escape.
And when people say, “Oh, I’m a deontologist. Kant is my lodestar in ethics,” I don’t know how they ever make decisions at the margin based on that. It seems to me quite incoherent that deontology just says there’s a bunch of things you can’t do or maybe some things you’re obliged to do. But when it’s about more or less — “Well, how much money should we spend on the police force?”
Try to get a Kantian to have a coherent framework for answering that question, other than saying, “Crime is wrong,” or “You’re obliged to come to the help of victims of crime.” It can’t be done.
Robert Wiblin: Yeah, it’s a somewhat boring moral vision where we’re just prohibited from doing a bunch of stuff, and then doesn’t really have much more to say.
Tyler Cowen: That’s right.
Robert Wiblin: Let’s move on a bit from the stuff where we see things basically the same to some areas where we have a somewhat different view.
You say early on in the book, “If you’re the kind of reader that I want, you’ll feel I have not pushed hard enough on the tough questions, no matter how hard I push.” So I’m going to try to push you here and take that to heart.
Tyler Cowen: Great.
Robert Wiblin: In the book, and, I guess, here so far, you’ve been focusing overwhelmingly on the importance of increasing economic growth, kind of getting to a better future faster. When we’re talking about growth here, we might imagine time on the X axis and welfare being generated in the universe on the Y axis, and you want to increase that faster.
Why focus on increasing the rate rather than making sure that that doesn’t go to zero?
Tyler Cowen: Well, keep in mind the core recipe is the rate of sustainable economic growth. If it’s going to go to zero, you’re knocked out of the box. So you’re maximizing across both of those dimensions, and I think, empirically, there are a large class of cases where more growth and more stability come together.
National defense is the easiest way to see that. If your society stays poor, someone will take you over. And those who take you over are probably nasty and will harm you. It’s not the only way in which growth and sustainability come together. But at most margins, they do. So there’s a wide enough class of cases where we can do both things at the same time.
I would note that earlier versions of this book — you know, I worked on this for about 20 years — the earlier versions had much, much more on existential risk, and it took me years to cut those out. I never repudiated any of the ideas. They just came up in enough other books. I felt I wanted to stick to my core notion of growth more than existential risk and stability.
Robert Wiblin: Okay, that’s answering some of my questions. Because in the book, you write, “Policies that prioritize growth at breakneck speed are frequently stable. The average civilization endured only 400 years, and this number appears to be declining. Our path in the future requires a tightrope act, balancing progress and stability along the way. And we should believe that the end of the world is a terrible event, even if that collapse comes in the very distant future. Similarly, the continual persistence of civilization 300 years from now is much better than having no further civilization at that time.”
But then so much of the book is dedicated to economic policy and how would we increase growth rather than focusing on this other word, sustainability — what are the biggest threats to sustainability in the future? And you’re just saying it’s been done elsewhere, so you wanted to focus on the growth.
Tyler Cowen: Richard Posner wrote one of the first books on this. When Posner's book came out, I immediately started doing a lot of editing on mine. You and many other people in the effective altruism movement have written on existential risk, and I endorse most of that. But just at the margin, it seemed to me growth was underestimated.
I think that one of the main risks, if not quite an existential risk, is a risk to ongoing growth: environmental issues. And there's plenty we can do for the environment that also boosts growth. Cutting down on air pollution has made people healthier and more productive, and made it easier to live in cities. As China cuts down on air pollution, say, in Beijing, it will make Chinese society more productive.
It would be more of a problem for the argument if you thought growth and stability were always at loggerheads. But there are large numbers of societies that collapse because they don’t grow enough. They can’t fend off, say, drought or weather problems or problems in their agriculture in world history, or they’re conquered by someone else.
Robert Wiblin: Do you think that still applies today?
Tyler Cowen: If the United States stopped growing, I feel a lot of free countries in the world would collapse or be taken over, or they would become unfree. If we grow at a very low rate, our budget will explode. It will cut back on our discretionary spending, our ability to advance science, to protect the world against an asteroid coming. So, yes, I absolutely think it applies today.
Robert Wiblin: I think I agree that if the US stops growing that would be very bad, principally because of the cultural and political effects that that would have and perhaps that we’ve started to see over the last five years.
But doesn’t that suggest that we want a sufficiently high level of growth? One that keeps people happy and looking forward to the future and being willing to accept some negative shocks because they know that things are going to get better in the future anyway? And that we don’t necessarily have to go from 4 percent GDP growth to 8 percent GDP growth — that’s not necessarily going to make things more stable.
Tyler Cowen: You’re talking about going from 4 to 8 percent. You may or may not think that’s stabilizing, but the actual reality is, we’re in the midst of one of our most wonderful labor market recoveries, there’s been a big fiscal stimulus. And year on year, we’re doing 2.7 percent, which is very poor compared to our past performance.
You see a lot of recoveries where we grow at 4 percent or more just to get back to where we were. The growth engine has slowed down. There’s a lot of evidence — some of which I present in my other books — that technological progress has slowed down.
It doesn't seem to me we're close to the margin of growth being so fast that we're thrown off the track. We have high levels of debt and deficits, and we don't know how to pay them off. And we're cutting into our future capabilities in infrastructure, military defense, science, many areas.
Robert Wiblin: Inasmuch as you're focused on economic growth in order to increase sustainability, it seems like a slightly odd focus for an individual to take, because many people already face strong incentives to grow the economy: they earn money from it, either labor income or returns from starting a business.
If a single person wanted to maximize sustainability of human civilization, would you recommend that they focus on economic growth? Or do you think that there’s more leveraged opportunities if they want to set aside making money?
Tyler Cowen: It depends on the person and what kinds of talents they have. But as I argued in my earlier book, The Complacent Class, there now seem to be so many people who are simply satisficers. They’re not very interested in innovating or even participating in a dynamic economy, and they just try to do well enough.
I'm here making a moral argument that at the margin, many, many people should be less complacent and take more chances. That will lower aggregate societal risk. Do more to innovate, save more, work harder, in some way be more dynamic.
You can think of this and Complacent Class as two sides of a bigger picture. Complacent Class is like the sociology of what we’re doing and this is the moral side.
Robert Wiblin: It seems like another technology that you might be very interested in that could have big effects on the trajectory of human civilization, and potentially avoid extinction — although it also could be very negative — would be the capacity to redesign human motivation and our personalities through genetic engineering. We could potentially select our children such that they are, say, very pacifist, such that they don’t want to kill one another.
If you could get large take-up of this technology, that could potentially lower existential risk, get it very close to zero, and give us brighter prospects of surviving for a long time. On the other hand, the ability to redesign human personalities such that we're so passive and will just accept dominance would potentially, again, facilitate totalitarianism and a very stable bad or neutral state. What do you think of that?
Tyler Cowen: I don’t think we know yet how genetic engineering will affect existential risk or even long-term growth. We don’t, at the moment, as you know, have the capabilities really to do that. As we develop them — if we do — we might have a better idea, but I think most people should be deeply agnostic and also somewhat worried about genetic engineering.
If you think we're on an okay civilizational trajectory right now relative to the more distant human past, and then we're going to have this other major event, possibly more important than nuclear weapons, probably we should be more worried than cheering.
That would be my take. But given that we don’t have it in front of us, it’s hard to say. It might all work out wonderfully.
Robert Wiblin: I guess you would think that, as that technology gets closer, it would be important to have people thinking about, “How do you regulate that? How do we make it applied well rather than badly?” That kind of thing.
Tyler Cowen: Of course we should think about that, but I’m not sure we’ll succeed in regulating it very well. There are many countries. Parents, I think, are willing to go to other countries. There will be black market versions of the technologies.
The regulation might fall to a least-common-denominator standard. Whatever we can do, I suspect we’ll end up doing one way or another, so I wouldn’t put too much faith in, “Oh, we’ll regulate out the bad versions and be left with the good ones.” We’re going to get some mix of the very good and very bad.
Robert Wiblin: Yeah, you’re always just pushing on the margin, trying to make it a bit more likely that we’ll use it in good ways and a bit less likely that we use it in bad ways.
Tyler Cowen: Yeah, sure.
Robert Wiblin: Are there any technologies that you can foresee over the next few hundred years that you think could end up being very important or could put humanity on a different trajectory, in the same way that perhaps nuclear weapons could have done that during the 20th century?
Tyler Cowen: I think changing the nature of human beings. You mentioned genetic engineering, but also just drugs. The opioid epidemic has grown much more rapidly than almost anyone had expected. We had long periods of time of technological stagnation in drugs because many of them were illegal, but that also means there’s a kind of low-hanging fruit.
Now, there's more that people can do in their own labs because of information technology. So one of my worries is that bad drugs get too much better too quickly, and we have many things like opioids that we can't control, and that becomes a much bigger social problem.
Just the susceptibility of people to alcohol. We take it for granted, but so many lives are lost each year, so many careers ruined, so much productivity lost. One of my personal crusades is, we should all be more critical of alcohol.
People will pull out a drink and drink in front of their children. The same people would not dream of pulling out a submachine gun and playing with it on the table in front of their kids, but I think it’s more or less the same thing. To a lot of liberals, the drink is okay and the submachine gun is not. I think, if anything, it’s the other way around, and I encourage people to just completely, voluntarily abstain from alcohol and make it a social norm.
Robert Wiblin: If we’re able to design better and better addicting substances, drugs or, perhaps, computer games or whatever else, it’s kind of the case that the Mormons will inherit the earth, or whoever is most resistant to those temptations and still wants to have children, even despite the fact that they can just shoot up on heroin.
Tyler Cowen: That’s right, so I try to encourage the productive people I know at the margin to be more Mormon, right?
Robert Wiblin: You mean have more children?
Tyler Cowen: Well, that too.
Robert Wiblin: Or avoid drugs?
Tyler Cowen: Avoid addictive substances of the wrong kind. Work, too, is an addictive substance, right?
Robert Wiblin: It seems like there’s a difficult tightrope here because we both want people to, in the short run, focus on growth and improving civilization and so on. But then we don’t want to lock in this value that it’s bad to experience pleasure because ultimately, we want to cash it out in something, which could involve using heroin or some much better future form of heroin.
Do you think it’s going to be possible to have a culture that supports that delicate balance?
Tyler Cowen: If you could have better drugs, but they didn’t destroy people, and they became the new intermediate incentive, like, “Innovate a new product, become a millionaire, and then you can afford to buy this truly wonderful drug that will be great on Sundays and won’t hurt your productivity.” That seems unlikely, but who knows?
Robert Wiblin: What is your vision for the long-term future? Do you see it as we’re going to have growth and then some kind of plateau? Or going to go up and down? Or will it just continue rising forever?
Tyler Cowen: I don’t think the rate of growth will rise forever. My view of economic history is that growth comes in spurts. It’s not an evenly managed process, though it was for part of the post–World War II era.
You have a thing called general purpose technologies, one of those being fossil fuels plus machines, which became significant in the 19th century, and then you have the big growth spurt. You do everything you can, say, with fossil fuels and machines: you get cars, you get planes, electricity, powerful factories. But at some point, your cars only get so much better. And then you wait for the next big breakthrough.
The next set of big breakthroughs may well involve the Internet, artificial intelligence, Internet of things. They are not quite here yet. You see many signs of them. They don’t yet make the growth rate much higher, and then you will have a big period of explosive growth and then a slowing down again. That’s my basic model.
Robert Wiblin: I was thinking less about growth and more thinking of just the absolute level of the economy or welfare in the universe, in the future. Do you think at some point it’s just going to level off because we’ll have done everything we can? We’ll have grabbed all of the matter we can access, and we’ll have figured out the best configuration for it to produce value. And at that point, it’s just a matter of milking it for as long as we can.
Tyler Cowen: No, I think the world will end before that happens.
I think at some point, there’ll be a new phase where we can directly make people in some way happier or more fulfilled or be more the people they want to be by manipulating something inside the brain. We do that in very crude ways today with antidepressants or even Viagra — not manipulating the brain, but it seems to make people happier.
That will be an enormous breakthrough of sorts. It’s not right before us. I don’t even think it’s the next breakthrough, but it seems at some point it will be possible.
Once we exploit that frontier, it seems to me the game will be about numbers — just having more very happy, very fulfilled people, and we'll turn our attention to making higher numbers sustainable. I don't see any obvious limit to that process. I do think the world will end before we complete it; I don't think we'll ever leave the galaxy, or maybe not even the solar system. But at some point it will just become a numbers game.
Robert Wiblin: Why do you think that we won’t leave the galaxy? And also, even if you think that that’s improbable, just given the fact that almost all of the potential value that we can generate is outside of this galaxy because that’s where most of the matter energy is. Shouldn’t we be pretty focused on that possible scenario where, in fact, we do leave the galaxy?
Tyler Cowen: I see the recurrence of war in human history so frequently, and I'm not completely convinced by Steven Pinker. I agree with him that the chance of a very violent war has gone down and is going down, maybe every year, but the tail risk is still there. And if you let the clock tick out for a long enough period of time, at some point it will happen.
Powerful abilities to manipulate energy also mean powerful weapons, eventually powerful weapons in decentralized hands. I don’t think we know how stable that process is, but again, let the clock tick out, and you should be very worried.
Robert Wiblin: What do you think is the probability that neither humans nor some kind of successor species exists in a hundred years or a thousand years or ten thousand years?
Tyler Cowen: A hundred years, I think it’s extremely small. It would be whatever is the small chance of some kind of galactic catastrophe, very small.
A thousand years, I think there’s at least a 10 percent chance. Not that every single human is dead, but that we’ve returned to some earlier, much poorer stage that’s quite destructive. And maybe the earth is ruled by roving bands which are violent, a kind of Mad Max scenario. It seems to me, the chance of that is reasonably high, way too high.
Robert Wiblin: If you're saying that there's a high probability that humans will still be around in a hundred years, I guess that suggests you think there's a very low annual risk of nuclear war? Why is that?
Tyler Cowen: I’m not sure what you mean by very low. I think it’s below 1 percent.
Robert Wiblin: Yeah, I think so too.
Tyler Cowen: I don’t know if that counts as very low, but again, it’s going to happen sooner or later. And how stable is it if you just trade one nuke back and forth, two countries? We don’t know, right? It’s never happened. I think the chance that that happens within 30 years is easily, say, 5 percent?
How destabilizing will it be? Do you have an immediate global financial crisis? Or do markets just react like, “Yeah, yeah, yeah”? Some currencies go up, some go down. It's a terrible tragedy, but for most of the world, kind of-sort of life goes on after these terrible tragic deaths. We don't know.
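Cowen's two figures, an annual risk below 1 percent and roughly a 5 percent chance of one exchange within 30 years, can be checked for consistency under a constant-hazard model. The constant-hazard assumption is my simplification, not something stated in the conversation:

```python
# Consistency check: what constant annual probability of nuclear use
# would give Cowen's ~5 percent chance over 30 years?
# Constant-hazard model: P(within N years) = 1 - (1 - p_annual)**N.
# The constant-hazard assumption is an editorial simplification.

p_30yr = 0.05
years = 30

# Solve 1 - (1 - p)**years = p_30yr for p.
p_annual = 1 - (1 - p_30yr) ** (1 / years)
print(round(p_annual, 5))  # about 0.00171, comfortably below 1 percent
```

So the two estimates are mutually consistent: a 5 percent chance over 30 years corresponds to an annual risk of roughly 0.17 percent, well under the 1 percent ceiling he gives.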
Robert Wiblin: Someone came to me, and they were asking for advice, as they sometimes do, on what can they do to improve the long-term future? They were deciding between increasing economic growth and, say, working to prevent a nuclear war or great power war between the U.S. and China.
I would almost always recommend that they work on the latter because I feel there are far fewer people who are working on that problem, so it’s substantially more neglected. What would you have to say to them?
Tyler Cowen: I definitely recommend people working on lowering the risk of nuclear war. One of my dissertation advisors was Thomas Schelling who, of course, is the classic theorist of nuclear war. Nuclear weapons, to me, are always the number one issue.
But that said, even if you sat down and said, “I’m going to do my best to limit nuclear war,” I don’t know what that means operationally. If you’re a president or in a parliament or maybe if you had a particular nonprofit, but I’m not sure disarmament is the answer.
Whereas to boost the rate of economic growth, there’s plenty that most people can do in that direction. I wish we had more good avenues for lowering the risk of nuclear war. I’d be very keen to hear about them. We’d actually be keen to support them with the Emergent Ventures fund.
Robert Wiblin: It seems like, given what you’re saying, that it’s likely that humans will go extinct before we manage to escape from this galaxy, or maybe even from the solar system. And that the reason for this is that, primarily, we’re going to be unable to coordinate between countries and individuals to prevent conflict that would destroy us.
Your top priority would be figuring out ways to coordinate humans better, and indeed that is a really high priority for people in the effective altruism community and for many people working within this longtermist framework elsewhere. Do you think that you might want to write a book about how to improve coordination and international cooperation in the future?
Tyler Cowen: Maybe. That may not be an issue that’s good for a book, of course. Some issues you write about but not necessarily in book form.
It still seems to me that education is a net positive for coordinating people and limiting their desire to slaughter each other. I understand it’s not always the case — a lot of the Nazis were well educated and so on. But still, on net, I think it’s a positive force.
Growth and education tend to come together. If we’re growing more, we can afford more education, we can do more to support education in poorer countries. So I still think economic growth is at least a partial, indirect means to some of those ends. Again, it’s something that’s easy to concretize. You can, to some extent, measure it. You know when you’re failing. And that makes it more useful than some other kinds of advice that maybe I still would truly fully support.
Robert Wiblin: I guess I viewed the invention of nuclear weapons as perhaps the most important moment in human history. Just look at around that time . . .
Tyler Cowen: I hope it ends up not being the most important moment in human history.
Robert Wiblin: Fingers crossed. Around that time, say, during the ’30s and ’40s and ’50s, the Soviet Union under Stalin had incredibly fast economic growth. People were moving from farms to factories. The Soviet Union was becoming substantially more powerful and a stronger military power and developing the ability to, in the future, build nuclear weapons of its own.
Do you think watching that in the ’30s and ’40s, we should have been glad that the Soviet Union had a fast rate of economic growth? Or should it have, on balance, concerned us? Both because it would potentially lead to more conflict between countries because you have more great powers, and also, because the person who was leading the Soviet Union was not a very nice guy.
Tyler Cowen: Of course, it should have concerned us, but on net, it was obviously a huge plus because the Soviets stopped the Nazis. But keep in mind also, my wife and daughter were born in the Soviet Union and grew up in a wealthier society. My father-in-law, who still lives with us — he was alive during the time of Stalin, and his life was better. He’s still alive today because Soviets had a higher rate of economic growth.
Soviets urbanized, probably more rapidly than China has done lately — that’s not a well-known fact. The world discovered a lot of talent through that urbanization and people being brought into formal education.
So it had a lot of benefits, since Stalin didn't wipe us out and he beat the Nazis. If you're looking for any case where a higher rate of growth had a big payoff, I think it's that one. That's not the counterintuitive case.
Robert Wiblin: I feel like, ex post, it definitely looks good, but at the point where the Soviet Union got nuclear weapons, I might have said, looking back, I wish it had not become wealthy that quickly. Because now we have a nuclear standoff, and in 1948 or 1949, you didn't know how stable that situation was going to be. Looking forward, you might think there's really a very substantial probability of humanity destroying itself during the Cold War.
Now looking back we can say, “Well, it wasn't so severe.” But you might have thought it would actually be better if there was just one country — given that we have nuclear weapons, what we really want is one country that's going to be a hegemon and dominate the world, so there won't be a nuclear war and we can have permanent stability. What do you think of that?
Tyler Cowen: I don't think we understand stability and nuclear weapons very well. Do keep in mind, the two times they've actually been used were when only one country had them. It doesn't mean we have a fully general theory there.
Nuclear weapons have spread, actually, at a slower rate than many people have expected. You read geopolitical theorists after the end of World War II — a lot of them think there’s going to be another nuclear war really soon, and we tend to dismiss them like, “Oh, those silly people, you know, they were just paranoid.” But maybe they were right, and they got lucky, and that’s the true equilibrium. I don’t think we should reject that view.
That gets back to an underlying issue with a lot of claims in the book. If you really think the chance civilized society might end or be defeated quite soon, you can’t look to any kind of long-term horizon to decide what is better, and you’re left with a kind of brute deontology for making choices. When that’s the correct scenario, it’s not about growth maximization, so I would accept that caveat.
Robert Wiblin: There's this interesting thing that, if you think that the risk of extinction is extremely low, though nonzero, then you should place extremely high value on the future, because in expectation it's going to last a very long time, and we have a high chance of colonizing a significant fraction of the universe. So that saves an argument for longtermism.
On the other hand, if you think that the risk of extinction is actually quite high — perhaps like 1 percent a year or something like that — then it’s true, if we managed to avoid extinction this year, then the benefit that we get from that is not so great because there’s still a good chance that we’re going to destroy ourselves in the future.
But the risk is so high, like 1 percent every year, that there’s probably a lot that could be done to lower that. So it’s potentially a more tractable problem because it’s a bigger problem to begin with?
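The tension Wiblin sketches can be illustrated numerically. The 1 percent annual figure is his hypothetical, and the constant-risk compounding below is just arithmetic on that assumption:

```python
# How fast a 1 percent annual extinction risk compounds, and what
# a risk reduction buys in expected survival time. Numbers are purely
# illustrative of the argument's structure.

p = 0.01  # hypothetical annual extinction probability

# Probability of surviving N years under a constant annual risk.
def survival(p_annual, n_years):
    return (1 - p_annual) ** n_years

print(round(survival(p, 100), 3))  # ~0.366: a century is a coin flip, roughly
print(survival(p, 1000) < 1e-4)    # True: surviving a millennium is unlikely

# Expected survival time is 1/p years, so halving the risk from
# 1 percent to 0.5 percent doubles it from ~100 to ~200 years.
print(1 / p, 1 / (p / 2))
```

This is the shape of Wiblin's point: at a 1 percent annual rate, the far future contributes little expected value, but the rate itself is a big, and therefore potentially tractable, target to reduce.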
Tyler Cowen: Yes.
Robert Wiblin: Do you have any thoughts on that?
Tyler Cowen: Let’s imagine it were the case that somehow we actually knew that, if we could construct hobbit society, but with people being taller, say, the world would not end. And if we don’t construct hobbit society, the world will end, say, through nuclear weapons. Let’s say we knew that or we thought 70 percent chance that’s likely to be true.
I still don't think we actually are good at implementing the means to bring about hobbit society. We would have to become brutal totalitarians. If anything, we might accelerate the risk of this nuclear war.
So when you think of the feasible tools at our disposal, that’s kind of outside our current feasible set, hobbit society. We’re on this path, I think we have to manage it. We can’t just slam the brakes on the car — it’ll careen off the cliff.
Our best chance is to master and improve technologies to make nuclear weapons, warning systems, second-strike capability, safer rather than riskier. I just think that’s the path we’re on, and the hobbits are not there for us.
Robert Wiblin: Perhaps if I had to summarize my overall world view in just one quote, it would be this quote from E. O. Wilson: “The real problem with humanity is the following: We have paleolithic emotions, medieval institutions, and god-like technology. And it is terrifically dangerous.”
This highlights my concern with the idea that we ought to increase economic growth, which seems to push more on the god-like technology than on improving the paleolithic emotions or the medieval institutions. By focusing on improving technology, we're widening the disconnect between our engineering, scientific, and technological ability and the fact that our personal and moral values and our institutions for governing ourselves have not kept up with it.
So I’d be perhaps more interested in seeing people focus on the emotions and institutions here to get them to catch up with our god-like technology than increasing the technology itself. What do you think of that?
Tyler Cowen: I’m more optimistic than Wilson and perhaps you. He refers to medieval institutions, but in most countries institutions are much better than that. What are the good medieval institutions that stuck around? Like parliament of Iceland? Oxford, where you’ve been? I suppose Cambridge? Maybe a few other schools, but we’ve built so much since then. I don’t mean technology. I mean quality institutions with feedback and accountability.
If you look, say, at how Singapore is run, a lot of the Nordic countries, some parts of American life — by no means all, just to be clear — Canada, Australia, where you're from: you see remarkable institutions, unprecedented in human history. I don't take those for granted; they're not automatic. But I think one has to revise the Wilson quote and be more optimistic.
Robert Wiblin: Yeah, so “medieval institutions” is perhaps an exaggeration. But do you think . . .
Tyler Cowen: But it’s a significant exaggeration, right?
Robert Wiblin: Okay. I think it’s the case that, probably, political institutions and our decision-making capacity aren’t improving as quickly as our technological capabilities. And I wish it were the other way around: that our wisdom and prudence and ability to make decisions that are not risky were moving faster than our technology.
Tyler Cowen: But see, I see it the other way around. If you look at data on economic growth, you see huge productivity improvements: China, India, basically free-riding on existing technologies, not usually making them better. It’s just managing companies better, having better incentives in companies.
If the world economy grew 4-point-whatever percent last year, way more of that 4.8 percent is coming from better management and better institutions than from new technology. Maybe 1 percent of it is coming from new technology and the rest from better management — in some cases, growing population, capital resources.
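Cowen’s rough growth-accounting split can be written out explicitly. The numbers are illustrative, taken straight from his remark (about 4.8% world growth, of which maybe 1 point is new technology):

```latex
g_{\text{total}} \;\approx\; g_{\text{tech}} \;+\; g_{\text{inst}} \;+\; g_{\text{factors}},
\qquad
4.8\% \;\approx\; \underbrace{1\%}_{\text{new technology}}
\;+\; \underbrace{3.8\%}_{\substack{\text{better management/institutions,}\\ \text{population, capital}}}
```

This is a back-of-the-envelope sketch of his claim that institutions are currently “out-racing” technology, not a formal growth-accounting exercise.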
So institutions are way out-racing technology right now. Again, I’m not taking that for granted, but I think people would be much more optimistic if they viewed it in that light.
Robert Wiblin: We’re both big fans of Philip Tetlock and his work . . .
Tyler Cowen: Sure, me too.
Robert Wiblin: . . . in superforecasting. I think it’s among the most important work in social science and some of the most impressive and interesting work that’s ever been done. Do you think it would be valuable to get much more effort going into improving decision-making in that form, rather than perhaps working on science and technology otherwise? Do you think we underinvest, sometimes, in social science relative to technical sciences?
Tyler Cowen: I think we underinvest in particular kinds of social science. Too many social scientists are overly specialized. They don’t read outside their disciplines. They don’t have the incentives to do something the world, as a whole, will find useful. Tetlock is a wonderful, shining counterexample to that.
Academic incentives are working less well than I think they did 20, 30 years ago, including in the social sciences, and we need to fix that. It will be very hard to do because existing structures are tightly locked in.
Robert Wiblin: Let’s take a step back in time, back to 1900. I imagine that, if you were alive then, you would say that the risk of human extinction in the next hundred years would be fairly low.
But then in the ’40s, we had the shock where we developed nuclear weapons, and suddenly, I think we would both agree, the risk of human extinction or the collapse of civilization went up quite substantially because, for the first time, we had the ability for one person or one country to basically wipe out most humans alive at the time.
What do you think are the chances that, in the 21st century, there’ll be some new breakthrough that’s analogous to nuclear weapons that will, again, give a level-shifting annual risk of human extinction?
Tyler Cowen: The possibility that worries me the most is simply an equivalent amount of power being more portable. I don’t think it has to be a new technology. It certainly might be, but simply the cost of a nuclear weapon or something like it being much cheaper. Bio weapons — they’re very hard to carry around and deploy, but you can quite readily imagine that becoming easier. It seems to me those are likely outcomes.
But terrorism in general, I don’t think we understand well, so after 9/11, people thought there would be many more attacks. You could ask questions, “Why don’t they just send a few people over the Mexican border? They get here, they buy submachine guns, they show up in a famous shopping mall and they take out 17 people. They don’t get any further than that, but it’s a massive publicity event. And this just happens every two or three weeks.”
A priori, it almost sounds plausible, but nothing like that has happened. If anyone has done that, it’s our native, white Americans who are not, in the traditional sense, terrorists. It’s clearly possible, but they don’t do it. So when you ask, how likely is someone to do something pretty horrible with a pretty cheap decentralized, highly destructive technology? We don’t even see them acting at the current frontier of destructiveness.
Think about what you need: people who are competent enough, motivated enough, coherent enough, who have a base to operate from. How hard is it, in a combinatorial sense, for all of those to come together? We don’t know, but I think the more you think about it, the more optimistic you become, rather than less.
Robert Wiblin: I think that’s fair, but it seems like over time, as our technology gets better, the number of people and the amount of expertise and the amount of security that you would need in order to pull off an operation like that is going down and down and down. Eventually, it could end up being a handful of people or even a single individual, and perhaps breakthroughs in biology are the most likely cause of that.
Do you think, perhaps, that the annual risk of human extinction is going down or up? There are varying factors here, and I guess the improvement of technology, in that sense, is one thing that’s pushing it up. Though I suppose we could also invent technologies that might give us the ability to prevent that from ever happening.
Tyler Cowen: I think it will go up over the next century, I don’t think it’s going up right now.
I once asked some of my friends an interesting question: If a single person, by a sheer act of will that they had to sustain for only five minutes, could destroy a city of their choice, how much time would have to pass before one individual on Earth would take the action to destroy that city? Is it like it would occur in two seconds, it would occur in 10 minutes, it would occur within a year? I don’t think we know, but no one should be optimistic about that scenario.
Robert Wiblin: Let’s say that humans do continue for thousands, perhaps millions of years, but for some reason, we decide to never leave Earth. So we don’t use the resources that are available elsewhere.
Tyler Cowen: Which would be my prediction, by the way.
Robert Wiblin: Okay.
Tyler Cowen: I think space is overrated.
Robert Wiblin: Okay. It seems that, in your view, that should be a horrific tragedy, that almost all the value that humanity could have created had been lost in that case.
Tyler Cowen: Space is hard, right?
Robert Wiblin: I’m not so sure, but go on.
Tyler Cowen: It’s far, there are severe physical strains you’re subject to while you’re being transported, and communication back and forth takes a very long time under plausible scenarios, limited by the speed of light. And what’s really out there? Maybe there are exoplanets, but then you have to construct an atmosphere. There’s a risk-diversification argument for doing it.
But simply being under the ocean or high up in the sky or distant corners of the earth, we’re not about to run out of space or anything close to it. So I don’t really see what’s the economic reason to have something completely external, say, to the solar system.
Robert Wiblin: It seems you’re okay with the idea that we can turn more matter and more energy into more value. So what is it, five times 10 to the 22 stars out there in the accessible universe at the moment? Literally, as the galaxies recede, that number is declining by about a billionth per year.
But if you’re in favor of growth and creating more value, it seems like almost all the value . . . No matter what you value, it has to be out there in all of that matter that we can reorganize. Given your desire for growth on Earth, I don’t understand how it could be the case that you wouldn’t be upset that we might just stop at the boundaries of Earth’s atmosphere.
Tyler Cowen: Oh, I’m upset about it, I’m just not very optimistic. If you put me in the legislature, I’ll vote to increase funding for space exploration. But relative — especially in the Bay Area — relative to other people I speak to in this kind of fringe group of intellectuals who think about space, I’m more pessimistic than just about all of them.
But it’s also that I’m more optimistic about the earth. The ocean of course is enormous — it could be platforms, it could be underwater. Deserts, places that can be terraformed, cities in the sky — you do want diversification, protection against a big nuclear war. Maybe for that you need other planets. There’s the moon, there’s Mars — they’re actually big enough to have diversification.
Robert Wiblin: But it does seem like no matter how hard we go on Earth, at some point we’ll have found the best configuration we can make for all of the matter and the energy that we can harvest here. And then in order to continue growing and avoid a plateau, which is terrible in your view, the only path is to go out.
There’s this beautiful thing, that once we go out into space and we start colonizing, then we get cubic growth because we’re growing like a sphere.
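Wiblin’s “cubic growth” point follows from simple geometry (a sketch, not from the transcript): a colonization front expanding at a constant speed $v$ for time $t$ sweeps out a sphere of radius $vt$, so the accessible resources scale with its volume:

```latex
V(t) \;=\; \frac{4}{3}\,\pi\,(vt)^{3} \;\propto\; t^{3}
```

This ignores cosmic expansion, which (per the earlier exchange about receding galaxies) eventually caps the reachable volume; until then, the resource base grows roughly as the cube of time rather than plateauing as it would on a fixed Earth.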
Tyler Cowen: I’m never going to vote no on that, but just some cautionary notes: There is a history of imperialism, where mostly European societies have grown and taken over other parts of the globe, and they did not in every way do maximum good, to say the least. I worry about how we might treat societies we encounter. We also may draw attention to ourselves as a target or a threat.
I’ll still vote yes on the expenditures, but I don’t view it, by any means, as this huge net positive. It’s something I also worry about a good deal, and I also think our corner of the galaxy will be wiped out before we get that far.
Robert Wiblin: It seems like, to make this view stable, you’re thinking that the probability of extinction is high, such that you’re pretty confident we’ll never go to space, but it’s not high enough that the overwhelmingly important thing is to work on extinction right now. Or maybe you do think what we should do is lower the risk of a catastrophe, but the best way to do that is via increasing the growth rate.
Tyler Cowen: Yes, and let’s say your modal scenario is everything ends in 10 thousand years. That’s still a long enough time horizon where the long-term results of higher growth now are very significantly positive for billions of humans. That will play the dominant role in a moral calculus.
But the idea that somehow we’re going to be sitting here three million years from now, and I’ll have my galaxy and you’ll have yours, and we’re not even human anymore. It’s not even recognizable as something from science fiction. I would bet against that if we could arrange bets on it.
Robert Wiblin: Why do you think that we couldn’t develop kind of self-replicating probes? I agree that humans are not going to travel to other galaxies — that’s way too hard. But at some point, we should be able to create intelligence that somewhat resembles humans — or might even be better than humans — in some form that’s easier to transport through space on computers or whatever the future example of computers is.
That kind of intelligence would have a much better shot at spreading to the stars. It can travel much faster, it’s much more resilient, and then it arrives there and starts creating more copies of itself.
Tyler Cowen: We don’t see self-replicating probes from other parts of the universe. Now maybe we are those self-replicating probes, in some way, right? We were superseded. But the fact that we don’t, in an obvious way, see them, to me strengthens the case for pessimism.
Robert Wiblin: Yeah, but you probably have read this paper that came out of the Future of Humanity Institute.
Tyler Cowen: Yes, the Fermi Paradox is not nearly as absolute as people used to think, but it’s still an issue. You still should update, in Bayesian terms, that you don’t see the aliens.
Robert Wiblin: Yeah, I agree that’s worrying, but because we have these alternative explanations that we might just be the one chance event where life began, I still had some hope that we’ll get there, we’ll be the first ones to colonize at least this part of the universe.
Tyler Cowen: I’m going to vote with you, that’s all I can say for now.
Robert Wiblin: Cool. You’re in favor of markets and kind of liberal governance, but I see two arguments here that might justify an alternative approach. One is, you’re in favor of faster economic growth, and planned economies in the past have been able to reinvest a much larger fraction of GDP in future growth, just building more factories rather than producing consumer goods, than market economies have been able to.
So perhaps, you might be interested in having some greater central planning of the economy that would allow us to do much more investment through science and technology or perhaps physical capital that will allow us to increase the economic growth rate? What do you think of that?
Tyler Cowen: We need to become more concrete, but the wealthiest societies in today’s world are, for the most part, the freest ones. There’s no guarantee that will always hold, but I think that’s an argument for some kind of liberal freedom.
But if you look at, say, China since 1979 — yes, they grew because they became significantly freer, but I suspect they also did better keeping some elements of Communist Party rule in place than if they had, say, followed the advice of Western reformers. And I think, for the most part — not on every decision — they did the right thing. I think we need to recognize that.
But that’s not a centrally planned economy either — it’s because they gave up central planning. But nonetheless they spent very heavily on infrastructure and still do, and that, in large part, comes from the government.
Robert Wiblin: We don’t want to give up the benefits of markets, absolutely. But I guess you’d be fairly happy if the United States spent quite a lot more on science and technology research, and perhaps the government built lots more infrastructure?
Tyler Cowen: I would spend much more on science and technology. When you say infrastructure, I want to disaggregate, but there are certainly plenty of things I would be willing to spend more on.
But the idea that you just throw a trillion-dollar bill at infrastructure: what ends up happening is the senators from Wyoming have their say, and you just build a lot more roads and actually make climate change worse. And you don’t upgrade your power grid or do the smart things. I don’t want to just uncritically endorse infrastructure. That, to me, can be a negative.
Robert Wiblin: Yeah, I’m with you. Perhaps another argument for less liberalism would be — you’re saying that, basically, you think there’s a high chance that humans are going to drive themselves to extinction. The reason is a lack of coordination in conflict . . .
Tyler Cowen: And cheap energy.
Robert Wiblin: Okay, and cheap energy. Too much power in the hands of people and the ability to destroy one another. This is a very severe problem. Perhaps, in order to solve that problem, we should be willing to have a world government. Kind of run towards a singleton, as Nick Bostrom calls it, which would be like having one decision-making process that is able to control everyone else, prevent conflicts.
Even if it doesn’t produce the optimal decisions, at least we won’t have extinction. We’ll be able to survive for a lot longer and generate some more value even if the singleton doesn’t make the absolute best decisions that we might think of from a liberal point of view. What do you think of that argument?
Tyler Cowen: It’s hard enough to get the European Union to stay together, and those countries have so much commonality of interest. I expect some further nations, after Brexit, to peel off over time. Try to get Southeast Asia to agree even to a local ASEAN being much stronger, being an EU-like phenomenon — simply impossible. It’s a recipe for creating conflict.
I understand the appeal of the vision. I’m all for NAFTA. I like multilateral institutions, but I think it’s the wrong way to go. The UN is of some use but in many ways an impotent bureaucracy. You would not want it ruling over us. You tend to recreate some of the worst aspects of national bureaucracies and then infuse them into a least-common-denominator sort of politically correct institution that’s just not very effective. So I think that’s the wrong path overall.
Robert Wiblin: But can you imagine that, perhaps, we convince many people of this kind of long-termist framework? They share our belief that extinction is very possible and would be a terrible catastrophe. So they’re willing to make many concessions, perhaps.
If you can get China and the United States to band together to say, “Our top priority is avoiding extinction and war, so we’re going to work together very closely.” Not to control everything, just to control access to the kind of technologies that would potentially produce human extinction. Could you see in the next 100, 500 years that kind of cooperation to make humanity more stable and civilization continue?
Tyler Cowen: There’s already a great deal of international cooperation on nuclear weapons. Right now, we’re trying to manage the North Korean situation. Cooperation is highly imperfect, but it’s remarkable how much is there. When the Soviet Union was collapsing and there were possibly loose nuclear weapons, there was a good deal of international cooperation to deal with that problem.
So we have very immediate successes near us. Could we do better? Absolutely, but the idea of there being this general public movement where you get people to do the right thing by scaring them, I think that’s the opposite of how politics usually works. Voters like to live in denial, and if you scare people too much with, say, climate change, they respond by thinking it’s not actually all that significant.
I think some kind of more positive vision — you’re more likely to get people on the sustainability bandwagon. That’s one of the backstories to my book: I’m trying to give a positive vision, emphasizing less scaring the heck out of people and more, “Here are the glories at the end of the road, what you can do for your descendants and world history.” Scaring people seems to backfire in politics.
Robert Wiblin: We’ve been talking a lot about the possibility of a nuclear apocalypse here, and that is a somewhat trickier one to figure out how to solve. But you bring up climate change, where it seems like it’s a lot more tractable.
It’s pretty clear what kinds of technologies you could work on in order to reduce the risk of really runaway climate change if we get unlucky. Do you think it’s particularly valuable for people to go and work on technologies that differentially reduce the risk of catastrophes like climate change?
Tyler Cowen: Oh, absolutely. I think, the last few years, a lot of those technologies have made more rapid progress than I would have thought — like electric cars, like fracking. Just the interest in China in cutting back on their air pollution, solar, nuclear. Some of it’s still on the drawing board, but I think they really intend to do it and probably will.
So the progress in the fight against climate change, even in the last few years, is much higher than people think, even though we don’t see the results yet in terms of measurements of carbon emissions. I wouldn’t quite say I’m an optimist, but there have been big gains in the immediate past.
Robert Wiblin: Are there any other technologies that you’re excited about because they differentially improve civilizational stability?
Tyler Cowen: Well, everyone talks about batteries, but I often feel batteries are a mixed blessing. Batteries, of course, would make it much easier to have green energy, but batteries also ease the decentralized storage of power and carrying around of destructive power.
If, instead of a gun, which is awful, but it’s hard to kill 1,000 people just shooting a gun, right? If you have some kind of pack on your back with a battery and then an energy-creating weapon that you just walk around with, and you have crazy people doing this the way they do now with guns — that worries me. I still think, on net, better batteries are a plus, but it cuts both ways.
Robert Wiblin: It seems like the most important technology, from your point of view, might be the ability to surveil people so that we can prevent any group from using really concentrated energy to end human civilization, but also the social technology then to regulate that such that it doesn’t lead to totalitarianism. Do you think research into something like that could be very valuable?
Tyler Cowen: I worry a great deal about surveillance, which, of course, has proceeded most rapidly in China. If surveillance really would make us safer, that would be an argument for it. But surveillance tends to corrupt your rulers, and it tends to increase the returns to being in charge. I think, over time, it increases the chances of, say, a coup d’état or political instability in China.
Even though you have more stability at the ground level, you may have less stability at the top. I think this is one of the two or three biggest issues facing the world right now: What are we going to do with surveillance and AI, facial and gait recognition? I don’t think we know what to do. I would say I more worry about it than applaud it.
Robert Wiblin: I think I’m with you. I’m not sure whether more surveillance or less surveillance is better right now. But it seems like finding better ways to govern surveillance, given that we’re probably going to have quite a lot of it, so that it doesn’t lead to these negative political outcomes, could be an extremely important research question that more think tanks should be looking into.
Tyler Cowen: Yes, and it’s quite possibly true that the gains in surveillance we’ve had so far are what have limited some of the potential sequel attacks to 9/11. We can’t know that for sure as outsiders, but many people suggest this is probably the case.
Robert Wiblin: In terms of ways that humanity might end, we’ve got nuclear war, just a great power war between the US and China, even setting aside nuclear weapons. We’ve got climate change, and I guess, where we’re both concerned about, new technologies that would really concentrate energy that would allow a lot of destructive power.
What do you think about perhaps a fifth one on that list, which is a negative global totalitarianism because technology allows a negative political order to stabilize itself by monitoring people too much?
Tyler Cowen: Well, that may happen in China. One scenario is the Chinese government will simply clamp down on opposition through surveillance, and that will be stable for a very long period of time. That might make society in China worse, but I don’t see why it’s destabilizing, even if it’s undesirable.
Robert Wiblin: Oh, no, I don’t think it would be destabilizing. The problem is it would be very stable, but bad. We would still lose most of the value because, perhaps, we’ve locked in a bunch of negative or neutral moral values.
Tyler Cowen: That’s one of my big worries for the forthcoming future. You mentioned climate change. I don’t think it’s an existential risk; I do think the expected costs are maybe higher than most people want to admit, but the notion that it would wipe out human civilization as we know it, you would need a very extreme scenario. I don’t think that’s very likely.
Robert Wiblin: I agree with you. Yeah, I think it’s unlikely that climate change could lead to extinction. I mean, maybe it could lead to significant loss of life. One possible way it could go is that it turns out that the temperature increases much more than we expect, so we get more like 6 to 9 degrees of warming. This sets us back economically, which triggers a negative cascade of consequences.
Tyler Cowen: Sure. That’s the most worrisome scenario. Keep in mind that regular air pollution — and I don’t mean carbon-based — just air pollution, right now it kills 6 to 7 million people a year. Obviously, a large number. Now, some of those are older people, are frail people. They might have died soon anyway, but still, it’s a number hardly anyone talks about.
Climate change right now is not killing 6 to 7 million people a year, and this we just absorb and move on. As you indicated, a lot of the risk of climate change is how it might set off other kinds of conflict.
Robert Wiblin: Yeah, or just the tail that we’ve totally mismeasured how it’s going to affect the climate. I think this is relatively unlikely, but perhaps is still worth having some people worry about.
Tyler Cowen: Or we could do bad geoengineering and make the world too cold.
Robert Wiblin: Okay, let’s move on to some other things in the book that I wasn’t entirely convinced by. You make the argument in one of the chapters that, even though our actions seem to have very large and morally significant effects in the long run, that doesn’t necessarily mean that we have incredibly onerous moral duties. We don’t necessarily have to set aside all of our projects in order to maximize the growth rate of GDP or improve civilizational stability. What’s your case, there?
Tyler Cowen: Well, I do think you have an obligation to act in accordance with maximizing the growth rate of GDP, but given how human beings are built, that’s mostly going to involve leading a pretty selfish life: trying to earn more, having a family, raising your children well. It’s close to in sync with common-sense morality, which to me is a plus of my argument. What it’s telling you to do doesn’t sound so crazy.
You don’t have to re-engineer human nature. So if someone from more of a Peter Singer direction says, “Well, all the doctors have to run off to Africa,” people won’t do that. We can’t and shouldn’t coerce them into doing that.
The notion that, by living a “good life” but making some improvements at the margin, that’s what you’re obliged to do — I find that very appealing. It’s like, “Change at the margin, small steps toward a much better world.” That’s the subheader on Marginal Revolution. It’s also a more saleable vision, but I think that it accords with longstanding moral intuitions, which shows it’s on the right track.
Robert Wiblin: Yeah, okay. It seems like, given your framework of long-termism, the moral consequences of our actions are much larger than what most people think when they’re only thinking about the short-term effects of their actions. In that sense, the moral consequences should bear on us more than they otherwise do.
Tyler Cowen: It’s very tricky, though. If you go around telling people, “Everything you do is going to change the whole world,” they’re going to get pissed off at you. They’re going to tune you out, so there’s a Straussian undercurrent in the book. The long term is really important, but people still need to focus to some extent on the short term to get to the long term. They can only handle so much computationally.
It’s not that I think the right answer is for everyone to be so attuned to the exact correct moral theory. They’re going to use rules of thumb. We’re going to rely on common-sense morality whether we like it or not — even professional philosophers will, and that’s okay, is one thing I’m saying. Just always seek some improvement at the margin.
Robert Wiblin: On the view that increasing GDP is a very important thing for people to do — among the most valuable things that they could do — do you think that people who are taking holidays, for example, or people who just aren’t starting a business that would grow as much as possible, or perhaps people who could go work in think tanks and do economic reform that would increase GDP much more than what they’re doing right now, would you at least . . .
Let’s say that they do have some moral concerns, so you’re not so concerned about them misreading your argument or getting angry and rejecting it. Would you say to those people that they do have a duty to do what . . .
Tyler Cowen: Absolutely, and I try to encourage them all the time. I try to hire them into think tanks, research centers. It’s one of my goals in the more practical side of my life, so absolutely.
Robert Wiblin: Let’s say it were the case that the best way to increase growth or to increase civilizational stability was to give very large amounts of money or to give away most of the income that you had to very poor people who could earn a greater rate of return. Would you advocate for people doing that?
Tyler Cowen: Absolutely. I’m a big fan of private philanthropy. There are quite a few very wealthy people who have pledged to give away most of their fortune. I’m a big advocate of that. I’m not sure they’re all giving it away in the right manner, and maybe they do have an obligation to think more critically about how they’re giving it.
So of course, but keep in mind, giving money to poor people does not always increase the rate of return. Sometimes wealthier people can earn yet more with the money and give more away later, so it’s not that you should always redistribute now.
Robert Wiblin: Another argument you make is that you want to have a strong grounding for human rights, but it seems like, on this long-termist framework, it’s possible that the consequences of actions could be so vast that it would dominate any rights concerns.
Then you make this argument that uncertainty about the consequences of our actions gives us a reason to still respect human rights. Do you want to put that argument?
Tyler Cowen: Well, let’s give a concrete example. Right now, in the northwestern part of China, the Chinese government is creating camps and detaining large numbers of people, by some estimates up to a million.
Now, some people in the Chinese government say, “Well, this is going to help us in the longer run. We’ll be more stable. We’ll grow more rapidly.” I’m very skeptical of that, but you might say, “Well, there’s some chance they’re right.” I think it’s unlikely, but . . .
Robert Wiblin: But let’s say you believed that.
Tyler Cowen: There are gross violations of human rights. When there’s this deep uncertainty about the future, you’re not comparing directly — well, detaining all these people versus the brighter, richer future. It’s like a lottery ticket, and the lottery ticket is so uncertain, it’s easier to respect the human rights.
You say, “Well, look. These are gross violations of rights. There’s really not a guaranteed payoff at all. It’s highly uncertain, at best. It may even be destabilizing.” And then I’ll say, “Just don’t do it, and your consequentialist conscience is not knocking on the side of your skull so hard.”
Robert Wiblin: Yeah, but it seems like you’ve got massive uncertainty on both sides of the ledger here when you’re comparing the thing where you violate human rights but you get some massive GDP gain versus the case where you don’t and you don’t get the GDP gain.
It’s entirely plausible that both of those actions could be both very positive and very negative because the future is just so unforeseeable. But I don’t see why it breaks in favor of the human rights case rather than just increasing GDP because it’s better in expected value terms.
Tyler Cowen: Again, if you think there’s any case for deontology at all, there’s not an argument deontology can wield to overturn the consequentialist conclusion in consequentialist terms. You’re just stuck with, “Don’t do it.”
Nowhere in the book do I try to outline how far those human rights extend. It’s partly beyond the sphere of my expertise. Also, I’m genuinely uncertain, but it seems to me that their sphere is not zero, not so absolute that everything, or even most things, are about deontology.
Robert Wiblin: I wasn’t sure whether to challenge you on this because I think, actually, it is good to promote the idea of human rights and to lock those into law.
I guess for both moral uncertainty reasons — that it’s possible that violating human rights is just absolutely wrong and no number of consequences can compensate — and also because it seems like a better rule to follow, one that in fact will lead to a better future, because GDP isn’t everything. Institutions matter a lot more, and concern for welfare as well.
Tyler Cowen: I fully admit I punt on the human rights issue. The book is about growth. I just want to reassure people, “You don’t have to go crazy and become an evil person to maximize growth.” But it would require another book, actually, a much longer one, and a lot of books should be organized around just one idea, one key idea.
Robert Wiblin: In the book, you say that you’re in favor of moral pluralism. So, there’s many different things that are morally valuable, and trading off between them might be hard. One thing that you mention is that you think that great art potentially has intrinsic moral value, regardless of the effects that it has on people’s conscious states.
I’m curious to know, what would you think of a case where, say, there was another universe where there was no life, there was no consciousness, but then by some crazy natural phenomenon, there was a planet that ended up covered with the great artistic artworks from Earth. No one ever got to see them, but they just arose naturally. Would that be better than the same universe, but without that artwork on that planet arising naturally?
Tyler Cowen: It seems to me it would be better. There’s more intrinsic value of some kind, but I would stress, none of the arguments in the book depend on that. It’s just another part of the bundle that might count as a positive.
Robert Wiblin: Yeah. Given that, really, all you can know is your own conscious states, how do you get this knowledge that art is intrinsically valuable, given that you can’t perceive the art directly?
Tyler Cowen: It seems to me there is something valuable about humanity reaching its highest potential, say, through the works of classical music or some of the greatest painters, that is not strictly reducible to the number of people paying money for it or enjoying it at any point in time. At some points in time, that number may be zero.
Simply having achieved certain kinds of semi-perfectionist peaks, to me, is part of the pluralist bundle. But again, I think it’s important that we have arguments robust to those who are skeptical that that should count at all.
Robert Wiblin: Yeah, I agree that’s absolutely not directly from the book in general. I thought that you might say that the planet case is not so useful because it lacks the achievement aspect because it just arose through erosion or something rather than through anyone actually accomplishing anything, and it’s the accomplishment that you’re valuing.
Tyler Cowen: Well, both, but if there are beautiful natural structures, as there are, there may be intrinsic value to the natural beauty above and beyond who is able to see them at any point in time.
Robert Wiblin: Do you worry that this reliance on intuitions about the value of particular things, or about how we are to respond to particular cases, is vulnerable to evolutionary debunking arguments? It’s like, we think that streams are particularly beautiful or that fertile lands look particularly beautiful.
It seems like we don’t really want to say that, in some fundamental, objective sense, all aliens, for example, ought to value the appearance of streams or paintings of natural scenes. That seems like a very idiosyncratic human thing rather than a fundamental moral principle. What do you make of that?
Tyler Cowen: Philosophers often overuse ethical intuitionism. Sometimes I’ll read a Philosophy and Public Affairs piece, and I’m always wishing they would write down axioms and argue for or against the axioms, but here’s one comparison they make, and then another.
You read through the piece. There are 17 different comparisons, and you’re all supposed to think about them a particular way because these intuitions are supposed to be obvious to us all. Those intuitions evolved in a Darwinian sense, and we should be skeptical about a lot of them.
If you’re trying to find what’s the intuition you should be least skeptical about, I would say it’s lives that are much richer or happier and full of these plural values to an extreme degree compared to other lives. Even there, we can’t be sure, but that seems a kind of bedrock. If you won’t accept that, I don’t know how there’s any discourse.
But who should get the kidney, or different pieces on abortion or redistribution — all these results, they seem to me quite sociologically class-specific in a way the philosophers themselves are not willing to admit.
Robert Wiblin: Yeah. There’s this big trade-off in philosophy between having a simple theory, a parsimonious theory that only has a few pieces, and then being able to match the common-sense intuitions we have about every case or about every claim.
I’m — in the field of philosophy, specifically — in favor of parsimony and against following common sense or having very complicated theories. In other domains, I think we need to use common sense and accept a loss of parsimony. Where do you fall on that spectrum?
Tyler Cowen: I’m a little closer to common sense than you are. It may not have much metaphysical standing, morally speaking, but the world is ruled by common sense. People behave in accord with common sense, so it’s probably counterproductive to stray too far from common sense.
A good ethical theory which has to have a practical component — it should be in accord with a lot of common sense, but revise other parts of it. You need both, and if the theory is either too much just matching the intuitions or totally overturning all of them, I get suspicious.
Again, this idea of “Revise at the margin.” It seems to me how we make progress in science, in business, so maybe it’s how we should try to make progress in ethics too. It has a pretty good track record.
Robert Wiblin: In this book, the influence of the philosopher Derek Parfit is clearly really vast. What do you think Parfit was most wrong about, and what do you think he was most right about that’s unappreciated today?
Tyler Cowen: Not too long before he died, Parfit gave a talk. I think it’s still on YouTube. I think it was at Oxford. It was on effective altruism. He spoke maybe for 90 minutes, and he never once mentioned economic growth, never talked about gains in emerging economies, never mentioned China.
I’m not sure he said anything in the talk that was wrong, but that omission strikes me as so badly wrong that the whole talk was misleading. It was all about redistribution, which I think has a role, but economic growth is much better when you can get it. So, not knowing enough about some of the social sciences and seeing the import of growth is where he was most wrong.
I think where he was most important is simply being the walking, living, breathing embodiment of the philosopher who is obsessively curious and will plumb the depths of any argument to such an extreme degree like has never been seen before on planet Earth. He was just remarkable, and that’s why he and his work have influenced so many people. I’m not sure which of his conclusions stand up, or even what his conclusions are. He’s not about conclusions; he’s about philosophizing in the Socratic sense. For that, he was just such a marvel. I wish more people could have known and seen and heard him.
Robert Wiblin: Yeah, he was hugely influential on me and a lot of other people. It’s a real shame he’s not with us anymore. Who do you see as his kind of philosophical successors? Because I think of Nick Bostrom at the Future of Humanity Institute. Perhaps Nick Beckstead, who wrote this great dissertation on the overwhelming importance of shaping the long-term future. Are there other people who you think we should now look to in lieu of Parfit?
Tyler Cowen: I’m a big fan of the two Nicks you mentioned. I don’t think of them as substitutes for Parfit; I think they’re doing something quite different.
Nick Bostrom has an engineering mentality to his work that Parfit never did. Like, “What can we do? What should we do? How do we apply resources?” Maybe it’s the next step, but who is the next Socrates? We will see. Probably it will be someone from quite an unexpected corner, perhaps.
I would also mention two influential figures: Nozick and Rawls. Rawls influenced a lot of people, but when you read Rawls on growth and the future, it’s incoherent. Rawls is afraid of economic growth. At times, he seems to endorse a stationary state because any savings makes the first generation worse off, and they’re the least well-off people. That to me is a reductio on Rawls’s argument, the entire argument.
Robert Wiblin: To me, also.
Tyler Cowen: That should not have had as much influence as it has had, though it’s a wonderful book. You learn a lot from it, and Rawls was a very impressive figure.
On Nozick, I think his actual views, which you see in his later works, involve a clear understanding that economic growth is good, but in Anarchy, State, and Utopia — it’s all about deontology. That doesn’t work.
The actual intuition that drives the Wilt Chamberlain example — “Well, poor people spend money on Wilt Chamberlain. They enjoy it. Wilt earns money.” — is that this is part of some broader process of growth that will elevate many, many people, not just that it’s good if some poor youths can pay money to see Wilt play basketball. So he never brought out what really was making his argument stick.
Robert Wiblin: Yeah. Harsanyi had, I think, a much better version of the veil of ignorance argument that advocated in favor of something like total utilitarianism. And then, I feel, it was ruined by Rawls. But Rawls’s theory, it turned out, was much more popular and got a lot more play.
What do you think is going on there? Do you agree with me that people should go back to the original Harsanyi veil of ignorance?
Tyler Cowen: I agree. I don’t favor either veil of ignorance, but Rawls was at Harvard. Rawls was a professional philosopher. He was connected in the right way. His book came along at the right time, when people wanted a rationale for a particular kind of social democracy.
The way in which Rawlsian — like, the principles of liberty, and then maximin principle and what can be good for everyone — the way all those interact, I tend to think, is not coherent, and there are many sleights of hand in A Theory of Justice, not just the problem with economic growth and future generations and savings rates. It’s a brilliant book, how well he disguised those. It’s a kind of master class in the philosophy of disguise, is how I admire the book.
Robert Wiblin: \[laughs\] What’s the most likely way that the worldview that you’re presenting in Stubborn Attachments would be fundamentally wrong?
Tyler Cowen: If the pessimistic scenario is correct. History is cyclical, we’re going to undergo some kind of retrograde process. There will be some future — we’re not all going to die, but the amount of value in that future is not high enough for the option of continued growth through the future to be the dominant one deciding what it is we should do. There’s a pretty good chance that’s correct and I’m wrong.
Robert Wiblin: Let’s work to make that false. What would Tyrone have to say about the book?
Tyler Cowen: Well, I think Tyrone would endorse the pessimistic view, that the future is not so grand and glorious. It doesn’t have the moral power I attribute to it, and that we just ought to have more of a kind of Nietzschean scramble for the here and now, and there is no final adjudicator of these clashing values.
Morality becomes not so much deontological, but for Tyrone, it would become relativistic and almost nihilistic. That’s what Tyrone said to me about this book. He bugs me all the time. I try to shut him up, but I can’t do it.
Robert Wiblin: \[laughs\] Maybe we can get him on the 80,000 Hours Podcast sometime.
It seems like the argument is really robust, though, because even if there’s a 10 percent chance that the future will be very big and glorious, that should still loom extremely large in our moral vision.
Tyler Cowen: I agree, but let me make another argument against myself, and that’s the Pascal’s Wager argument. Let’s say sustained growth has a value approaching infinity, but the chance we can ever get there — it’s not 10 percent, it’s like 1 percent.
So if we’re not persuaded by Pascal’s Wager — a small chance of a really large payoff — in other contexts, like whether or not we should believe in the deity, maybe I’m offering the new version of Pascal’s Wager. Decay is really pretty likely, the future doesn’t extend that far. There’s like a 1 percent chance it does, and I’m trying to religiously preach to people to believe in that future, make it self-fulfilling, get people to believe, but I’m playing my own Pascal’s Wager game.
I’m not saying Pascal’s Wager is always wrong, but we know it’s problematic. It’s not convincing per se, and if you’re looking for problems in the book, maybe that’s another one.
Robert Wiblin: I have an episode with the philosopher Amanda Askell where we talk about problems with infinite ethics and Pascal’s Wager, which perhaps we can stick up a link to. It raises a lot of issues that we don’t really have time for.
Tyler Cowen: Maybe the fundamental, and indeed insoluble, problem of philosophy is how to integrate the claims of nature with the claims of culture. They’re such separate spheres, but they interact all the time.
In the final appendix of my book, Appendix B, I talk about this problem. How do you weigh the interests of humans versus animals, or creatures that have very little to do with human beings? I think there’s no answer to that. The moral arguments of Stubborn Attachments are all within a cone of sustainable growth for some set of beings. And comparing across beings, I don’t think anyone has good moral theories for that.
Robert Wiblin: But it seems like on your view, you should think that, while we don’t know what the correct moral trade-off is between humans and animals, there is a correct moral trade-off. It’s just very hard to figure out what it is.
Tyler Cowen: I’m not sure what we would make reference to to make that trade-off. There’s some intuitionism, like gratuitous cruelty to animals — even not very intelligent ones — people seem to think is bad. That’s easy enough to buy into.
Robert Wiblin: But you support interpersonal aggregation across humans. Then it just seems like there should be a similar principle — though more difficult to apply in practice — that would apply to a chimpanzee and a human?
Tyler Cowen: We’re very far from knowing what it is. But chimpanzees are pretty close to humans, so that strikes me as quite possible. But if you’re talking about bees and humans . . . What if another billion bees can exist, but one human has to have ongoing problems with migraine headaches? My best guess is we will never have a way of really solving that question using ethics.
Robert Wiblin: Yeah. I agree that the practical problem gets very severe when you’re comparing humans and insects. But I think, in principle, the solution follows the same kind of process as when you’re comparing humans and other humans and chimps.
Tyler Cowen: I’m not sure the practical problem is different from the conceptual problem. I think it’s a conceptual problem, not a practical one. We could hook up all the measurements to those bees we want, and at the end of the day, whether a billion of them is worth a migraine headache for a human . . .
Robert Wiblin: But you say you’re a moral realist. Shouldn’t there be an answer then?
Tyler Cowen: I don’t think there’s an answer to every question under moral realism.
Robert Wiblin: Is it possible that murdering nonhuman animals might be morally prohibited in the same way that, because of human rights, it’s morally prohibited to murder another human?
Tyler Cowen: Oh, of course.
Robert Wiblin: Even if that’s unlikely — and it sounds like you think it might not be that unlikely — then just because of moral uncertainty, because it might be morally prohibited and might just be really terrible, we should abstain from murdering animals?
Tyler Cowen: As much as we can. Yes.
Robert Wiblin: Years ago, you wrote that, in order to enforce a level of epistemic humility on yourself, which you think is appropriate, you tried to be extremely reluctant to move your credences out of the range of 40 percent to 60 percent, on controversial issues at least.
I found that that really stuck in my head for many years and became a bit of a rule of thumb to me, that when I see my credences moving out of the 40 to 60 percent range, I have to stop and really pause and think about whether the evidence is strong enough. Do you still try to follow that principle? And if so, how do you go about it?
Tyler Cowen: I try all the more. I think the best way to go about keeping epistemic humility is to try to write out the arguments of the side you disagree with. In part, I use Marginal Revolution as a vehicle for that. It’s a selfish use for me.
Robert Wiblin: You think that there’s many plural values. Is it nonetheless possible that the best future would involve just maxing out on one of those values because it’s more efficient to produce that value than the other ones? You can get more bang for buck?
Tyler Cowen: But if you think about a life in the repugnant conclusion — well, you’re alive for a few minutes, someone feeds you a potato, you hear some Muzak, and you pass away. Well, isn’t that better than nothing? In my view, those are not human lives as we understand the terms, even if they look like humanoid beings.
So it’s getting back to the question of comparing a billion bees to one person having a migraine headache, and I just don’t think we can do it. That moral realism can’t handle utility comparisons across very different kinds of beings.
Robert Wiblin: Yeah. I feel like a weakness of the repugnant conclusion kind of thought experiment is that it ties together multiple issues. One thing that people don’t like about it is the blandness of it, the stability of the welfare that people have in that world.
Imagine that where we are now is a repugnant-conclusion kind of world: we humans have ups and downs, and on net our lives may be only weakly positive. But people don’t say it’s terrible that there are more people with lives of the kind we have, or that it’s undesirable for civilization to continue, just because our lives could be much better in principle.
We could imagine beings that have 100 times the welfare that we do over their lives. By comparison to them, this world is kind of the repugnant conclusion, basically.
## https://www.the-american-interest.com/2020/01/15/the-world-according-to-tyler-cowen/
I think addiction is an underrated issue. It’s stressed in Homer’s Odyssey and in Plato, it’s one of the classic problems of public order—yet we’ve been treating it like some little tiny annoyance, when in fact it’s a central problem for the liberal order.
## [http://brownpoliticalreview.org/2019/10/bpr-interviews-tyler-cowen/](http://brownpoliticalreview.org/2019/10/bpr-interviews-tyler-cowen/)
I haven’t yet read a really good critique of meritocracy. There’s plenty you can say against it. But, as with Churchill on democracy, it seems that all of the other systems are worse. I would stress the point that no meritocracy is ever quite presented as such, that in all social systems there are cushions and pillows for people’s egos and a true meritocracy where everyone knew their exact worth would, in fact, be psychologically intolerable. But, we’re also incapable of producing that. So it’s really about trying to have a system that actually rewards merit while not forcing people to quite face up to the fact so explicitly. And, it seems to me we have not gone too far with meritocracy, properly understood.
Nick: You often describe your role in Emergent Ventures as searching for talent. What’s an underrated signifier of talent?
Tyler: I think IQ is probably overrated. Conscientiousness is hard to measure. Early on, I would say stamina, the ability to just stick with things and be sturdy and keep on at it. And, that’s hard to spot, but you want to look for stamina in people.
Elena Ferrante’s four-volume Neapolitan tetralogy, and Knausgaard’s My Struggle, volumes 1 or 2. Those, to me, stand up with the greatest novels of the 21st century.
the idea that \[...\] to make progress you have to give up everything you hold dear. I find that unsettling. I hope it’s not true.
Nick: In the book, you argue that growth is good because a wealthier society is better off and is better able to realize a more pluralistic set of values than a poorer society. If it turned out that the wealthiest members of society were able to capture all the gains from growth, would your argument cease to be correct?
Tyler: That’s correct. It would cease to be correct if that were true. Fortunately, it’s not true. So, I might be happier than Bill Gates.
Nick: I found Appendix B of the book a bit difficult where you discuss Derek Parfit’s repugnant conclusion and animal welfare. You talk about how human flourishing is, in your metaphor, a Crusonia plant, in that it is self-perpetuating and can grow indefinitely. In the appendix, you argue that we can’t directly compare it to other Crusonia plants like the Crusonia plant of Parfit’s repugnant conclusion or of the welfare of animals. But if we’re not able to aggregate utilities and compare between metaphorical Crusonia Plants, why even be utilitarian at all?
Tyler: Well, I’m not a utilitarian per se. I would say I’m a consequentialist, but there’s a relativistic element to my consequentialism. So questions like, “How many happy plants are worth the life of one baby?” — maybe there can never be enough. But I suspect the question just isn’t well-defined. How many dogs should die rather than one human being? I don’t even know what the units are. So, I think the utilitarian part of consequentialism only makes sense within frameworks where there’s enough commonality to compare wellbeing.
Nick: Finally, you’ve often said that most political disputes are really disputes about who gets status. Nominate a few things or people to which we should give more status?
Tyler: Everyone. Everyone pretty much deserves more status (not Hitler, not mass murderers). Most things are underappreciated and over-criticized. Praise motivates people and helps them have a sense of fitting in. Going around appreciating, and expressing your appreciation for, what you really value — that’s one of the best things you can do with your life.
## Cato Unbound discussion
[https://www.cato-unbound.org/2019/01/17/tyler-cowen/radicalism-stubborn-attachments](https://www.cato-unbound.org/2019/01/17/tyler-cowen/radicalism-stubborn-attachments)
https://www.cato-unbound.org/2019/01/18/agnes-callard/when-economics-fails
[https://www.cato-unbound.org/2019/01/22/tyler-cowen/we-have-lost-our-way-fundamental-manner](https://www.cato-unbound.org/2019/01/22/tyler-cowen/we-have-lost-our-way-fundamental-manner)
[https://www.cato-unbound.org/2019/01/23/agnes-callard/radicalism-replaceability-bounded-obligations](https://www.cato-unbound.org/2019/01/23/agnes-callard/radicalism-replaceability-bounded-obligations)
But what of Tyler’s argument that “losing an irreplaceable civilization is a much greater tragedy than losing a civilization in a way which allows for the birth of a new and different one in its place. Replaceability therefore seems to count for something, even if we do not agree for how much”? (p.86) The inference here (“therefore”) is fallacious. One can grant that it is good if another civilization (or human being) comes to be after one passes away, better than if one does not, without thinking that the second constitutes any kind of replacement for the first. A new child in no sense at all substitutes for a recently dead loved one. Marginalism is out of its depth here, because not all value concepts are hospitable to the idea of trade or substitution or replacement. The most fundamental principles of human life lie outside the scope of economic thought, and are instead situated in philosophy.
\[ tyler replying to Agnes \]
When it comes to valuing lives, different lives are either commensurable or they are not, again as I discuss in the book. If they are not, nobody is going to produce meaningful rankings of different social states of affairs, not even by summoning up the mysterious ghost of “philosophy.” If human lives are commensurable in some way, we are back to sustainable compounding growth as giving us a decisive answer. When it comes to real world policy, we must indeed choose, and it is simply punting to claim there is no basis for comparison or trade-offs. Economics and the logic of social choice return, whether we like it or not.
Tyler challenges me: if we don’t compare the value of human lives, how can we possibly decide how much medical care the government should provide? I respond: There are many ways! Philosophy teaches that such a comparison is but one way to make a decision. Since this isn’t an essay in systematic philosophy, I won’t aim for coverage of the field, but instead focus on one plausible contender: the concept of bounded obligation.
I do not find myself stymied by the demand to make intelligent, well-grounded decisions about how much educational care to provide, even given my refusal to make invidious comparisons about the comparative value of my students’ minds.
## Tyler's TED talk: Be suspicious of stories
"This guy Tyler Cowen came (Laughter) and he told us not to think in terms of stories, but all he could do was tell us stories (Laughter) about how other people think too much in terms of stories." (Laughter) So, today, which is it? Is it like quest, rebirth, tragedy? Or maybe some combination of the three? I'm really not sure, and I'm not here to tell you to burn your DVD player and throw out your Tolstoy. To think in terms of stories is fundamentally human. There is the Gabriel García Márquez memoir "Living to Tell the Tale": we use memory in stories to make sense of what we've done, to give meaning to our lives, to establish connections with other people. None of this will go away, should go away, or can go away. But again, as an economist, I'm thinking about life on the margin, the extra decision. Should we think more in terms of stories, or less in terms of stories? When we hear stories, should we be more suspicious? And what kind of stories should we be suspicious of? Again, I'm telling you it's the stories, very often, that you like the most, that you find the most rewarding, the most inspiring. The stories that don't focus on opportunity cost, or the complex, unintended consequences of human action, because that very often does not make for a good story. So often a story is a story of triumph, a story of struggle; there are opposing forces, which are either evil or ignorant; there is a person on a quest, someone making a voyage, and a stranger coming to town. And those are your categories, but don't let them make you too happy. (Laughter) As an alternative, at the margin — again, no burning of Tolstoy — just be a little more messy. If I actually had to live those journeys, and quests, and battles, that would be so oppressive to me! It's like, my goodness, can't I just have my life in its messy, ordinary — I hesitate to use the word — glory, but such that it's fun for me? Do I really have to follow some kind of narrative? Can't I just live?
So be more comfortable with messy. Be more comfortable with agnostic, and I mean this about the things that make you feel good. It's so easy to pick out a few areas to be agnostic in, and then feel good about it, like, "I am agnostic about religion, or politics." It's a kind of portfolio move you make to be more dogmatic elsewhere, right? (Laughter) Sometimes, the most intellectually trustworthy people are the ones who pick one area, and they are totally dogmatic in that, so pig-headedly unreasonable, that you think, "How can they possibly believe that?" But it soaks up their stubbornness, and then, on other things, they can be pretty open-minded. So don't fall into the trap of thinking because you're agnostic on some things, that you're being fundamentally reasonable about your self-deception, your stories, and your open-mindedness. (Laughter) \[Think about\] this idea of hovering, of epistemological hovering, and messiness, and incompleteness, \[and how\] not everything ties up into a neat bow, and you're really not on a journey here. You're here for some messy reason or reasons, and maybe you don't know what it is, and maybe I don't know what it is, but anyway, I'm happy to be invited, and thank you all for listening. (Laughter) (Applause)
https://www.ted.com/talks/tyler_cowen_be_suspicious_of_stories/transcript
People don't like it; they like to push that stuff away and keep things neat and easy to deal with. It's what I call the philosophy of once-and-for-all-ism: they want to be done with stuff once and for all. But that rarely works. \[...\] It's a mess, and that's OK.
## https://notunreasonable.buzzsprout.com/126848/814311-tyler-cowen-on-stubborn-attachments-tyrone-and-multiple-perspectives
# Stubborn Attachments
## Takeaways
Capitalist economics has been incredibly successful in generating wealth and improving the human condition. We forget how overwhelmingly positive the effects of economic growth have been.
Crusonia plant: something that sustains compound growth in a positive-sum manner.
Compound growth of 5% sustained over a long period is vastly better than 3% or 1%.
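The takeaway about growth rates can be made concrete with a bit of back-of-the-envelope arithmetic (my illustrative numbers, not figures from the book):

```python
# Illustrative only: the wealth multiple after a century of compounding
# at different sustained annual growth rates.
def growth_factor(rate: float, years: int) -> float:
    """Multiple of initial wealth after compounding at `rate` for `years`."""
    return (1 + rate) ** years

for rate in (0.01, 0.03, 0.05):
    print(f"{rate:.0%} growth for 100 years: {growth_factor(rate, 100):.1f}x initial wealth")
# 1% yields roughly 2.7x, 3% roughly 19x, and 5% roughly 131x --
# over long horizons, the growth rate dwarfs almost any one-time gain.
```

This is the arithmetic behind the Crusonia-plant intuition: small differences in the exponent swamp large differences in the base.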
## Selected Highlights
Wealth Plus: “The total amount of value produced over some time period. This includes traditional measures of economic value, as would be found in GDP statistics, but also measures of leisure time, household production, and environmental amenities, as summed up in a relevant measure of wealth.”
We already can see that three key questions should be elevated in their political and also philosophical importance, namely: 1. What can we do to boost the rate of economic growth? 2. What can we do to make our civilization more stable? and 3. How should we best deal with environmental problems? The first of these is more commonly considered a “right wing” or perhaps libertarian concern, the second is most commonly a “conservative” preoccupation, and the third is, especially in the United States, most commonly associated with “left wing” perspectives. Yet these questions should be central, rather than peripheral, for everybody.
loc. 268-273
We often forget how overwhelmingly positive the effects of economic growth have been.
loc. 281-282
First: The Principle of Growth: “We should make political choices so as to maximize the rate of sustainable economic growth, as defined by Wealth Plus.”
“We should push for sustainable economic growth, but not at the expense of inviolable human rights.”
What makes a growth-maximization rule compelling is its attachment to a Crusonia plant, namely very large ongoing and indeed compounding gains in human welfare. Some rules, such as “Never lie,” face embarrassing counterexamples if lying can bring significant practical benefits in particular instances. But a rule of “maximize the rate of sustainable economic growth” does not face a comparable problem. By the very definition of such a rule, it is telling us to follow outcomes with a preponderance of benefits over costs. So practical costs may overturn or modify some rules, but they will not limit The Principle of Growth, which will be limited only by absolute or near-absolute human rights.
loc. 654-659
we see plenty of evidence that history can matter over very long time spans. Therefore, any act which strengthens good institutions today has, in expected value terms, a causal stretch running centuries into the future. That again means that our choice of discount rate is of critical importance.
loc. 832-834
By the way, we can see the importance of faith to the overall argument. To fully grasp the import of doing the right thing, and the importance of creating wealth and strengthening institutions, we must look very deeply into the distant future. To be sure, as I have argued at length, this is a conclusion suggested by reason and not running against reason. But in the real world of actual human motivations, the application of abstract reason across such long time horizons is both rare and unconducive to getting people emotionally motivated to do the right thing. The actual attitudes required to induce an acceptance of such long time horizons are, in psychological terms, much closer to a kind of faith. We cannot see these very distant expected gains, but nonetheless we must believe in them and hold those beliefs close and dear to our hearts. In this sense we should strongly reject the modern secular tendency to claim that a good politics can or should be devoid of faith.
loc. 834-841
It can be argued very plausibly and I think correctly that we are obliged to help the poor more than we are doing now. But the correct approach to our cosmopolitan obligations does not lead to personal enslavement or massive redistribution of our personal wealth. Most of us should work hard, be creative, be loyal to our civilization, build healthy institutions, save for the future, contribute to an atmosphere of social trust, be critical when necessary, and love our families. Our strongest obligation is to contribute to sustainable economic growth and to support the general spread of civilization, rather than to engage in massive charitable redistribution in the narrower sense. In the longer run, greater economic growth, and a more stable civilization, will help the poor most of all.
loc. 943-948
A preoccupation with pursuing growth – or some modified version of the growth ideal – thus means a preoccupation with ideas, a preoccupation with cultivating human reason, and a preoccupation with the notion that man should realize, perfect, and extend his nature as a generator of powerful ideas which can change the world.
loc. 1276-1279
The Principle of Roughness: “Some of our choice options will differ in complex ways. We might nonetheless, ex ante, make a reasoned judgment that they are roughly equal in value, and that we should be roughly indifferent across the two options. After making a small improvement to one of these choices, we still might be roughly indifferent to which option is better.”
loc. 1438-1441
We should choose the course that is most likely to be correct, but at the end of the day we are more likely wrong than right. Our particular views, in politics and elsewhere, should be no more certain than our assessments of how to play that roulette wheel. With this attitude, political posturing loses much of its fun and indeed it ought to be viewed as disreputable or perhaps even as a sign of our own personal delusions.
loc. 1497-1500
a. Policy should be more forward-looking and more concerned about the more distant future.
b. Governments should place a much higher priority on investment than is currently the case, whether that concerns the private sector or the public sector. Relative to what we should be doing, we are currently living in an investment drought.
c. Policy should be more concerned with economic growth, properly specified, and policy discussion should pay less heed to other values. And yes, your favorite value gets downgraded too. No exceptions, except of course for the semi-absolute human rights.
d. We should be more concerned with the fragility of our civilization.
e. The possibility of historical pessimism stands as a challenge to this entire approach, because in that case the future is dim no matter what and there may not be a more distant future to resolve the aggregation dilemmas involved in making decisions which affect so many diverse human beings.
f. At the margin we should be more charitable but we are not obliged to give away all of our wealth. We do have obligations to work hard, save, invest, and fulfill our human potential, and we should take these obligations very seriously.
g. We can embrace much of common sense morality, while knowing it is not inconsistent with a deeper ethical theory. Common sense morality also can be reconciled with many of the normative recommendations which fall out of a more impersonal and consequentialist framework.
i. When it comes to most “small” policies, affecting the present and the near-present only, we should be agnostic because we cannot overcome aggregation problems to render a defensible judgment. The main exceptions here are the small number of policies which benefit virtually everybody.
loc. 1584-1598
I therefore would like to be more suspicious of our little voice in favor of supreme short-run pragmatism. I wish to suggest that it is a vice, the thinking man's equivalent of the savage's short-run gratification. It is our latest version of how to feel good about ourselves, at the potential danger of, in modern terms, letting Rome burn. I suggest that we should instead turn our political energies to thinking about the long-run fortunes of our civilization. That means focusing on the future of freedom, wealth, science, and healthy, well-functioning institutions, governed by rules and rights.
loc. 1622-1626
## All highlights
I do not take the productive powers of economies for granted. Production could be much greater than it is today and our lives could be more splendid. Or if we make some big mistakes production could be much less and we could all be much poorer. This simple observation helps us put the idea of production at the center of moral theory, as without production value is problematic. For all of her failings, Ayn Rand is the one writer who has best understood the importance of production for moral theory, a point which she expressed enthusiastically at great length, albeit with numerous unfortunate caricatures. It is the work of capital, labor, and natural resources – driven by the creative individual mind -- which undergird the achievements of our civilization. Whether or not you agree with all of Rand’s political views, do not take the existence of wealth for granted.
loc. 148-154
---
It is no accident that religious people on average have much higher rates of fertility, or that they engage in so many long-term business and charitable projects for the future, as suggested long ago by Max Weber.
loc. 160-162
---
A Crusonia plant, measured in terms of its ability to produce apples, might grow five percent each year on net. At the same time, it looks like a modest apple tree, and it does not appear to resolve key ethical and political questions. The Crusonia plant may sound unrealistic or a bit silly, but it’s a useful example for pinpointing the nature of our quest. The Crusonia plant is an example of a free lunch – at least a free lunch of apples – once you have obtained it.
loc. 188-192
---
We could compare two plants in terms of various qualities, such as their color or their scent, but after a while the unceasing free yield of the Crusonia plant has to prove better. At some point the sheer accretion of value, from the ongoing growth of the Crusonia plant, dominates the comparison between the two plants. We thus have a principle of both ethics and prudence: when in doubt choose the Crusonia plant. When it comes to making tough decisions, we should try to identify which elements in the choice set resemble a Crusonia plant. If we could find some choices or policies which give rise to the equivalent of Crusonia plants, namely ongoing and self-sustaining surges in value, the case for those choices would be compelling. Furthermore, if it turned out that Crusonia plants were more common than is at first sight apparent, aggregation problems would be eased more generally.
loc. 197-203
---
The natural candidate for such a process is economic growth or some modified version of that concept. If sustainably positive-sum institutions exist, there may be Crusonia plants all over. As we’ll see, standard definitions of economic growth do not fully qualify as true Crusonia plants, in part because they ignore environmental sustainability and in part because they do not adequately value leisure time. Nonetheless if we think about economic growth a little more broadly, we will have a relevant Crusonia plant for making decisions.
loc. 204-208
---
Current gdp statistics have a bias towards what can be measured easily and relatively precisely, rather than focusing on what contributes to human welfare. With this in mind, I will define the concept of Wealth Plus: Wealth Plus: “The total amount of value produced over some time period. This includes traditional measures of economic value, as would be found in gdp statistics, but also measures of leisure time, household production, and environmental amenities, as summed up in a relevant measure of wealth.”
loc. 214-218
---
In terms of providing operational guidance in calculating Wealth Plus, the two best efforts I know are Jones and Klenow (2010) and Becker, Philipson, and Soares (2005). For a good piece which puts production and productivity at a central place in moral theory, see Stanczyk (2012).
loc. 222-224
---
Michael Shermer (2002) has compiled an informal database on civilizational survival. He catalogued sixty civilizations, including Sumeria, Mesopotamia, Babylonia, the eight dynasties of Egypt, six civilizations of Greece, the Roman Republic and Empire, various dynasties and republics of China, four periods in Africa, three in India, two in Japan, six in Central and South America, and six in modern Europe and America. He finds that the average civilization endured 402.6 years.
loc. 234-237
---
He also finds that decline comes more rapidly over time. Since the fall of Rome average duration of a civilization has been only 304.5 years.
loc. 237-238
---
primitive warfare appears to have been at least as frequent, bloody, and arbitrary in its violent effects as modern warfare. Earlier societies were neither idyllic nor peaceful.
loc. 264-266
---
We already can see that three key questions should be elevated in their political and also philosophical importance, namely: 1. What can we do to boost the rate of economic growth? 2. What can we do to make our civilization more stable?, and 3. How should we best deal with environmental problems? The first of these is more commonly considered a “right wing” or perhaps libertarian concern, the second is most commonly a “conservative” preoccupation, and the third is, especially in the United States, most commonly associated with “left wing” perspectives. Yet these questions should be central, rather than peripheral, for everybody.
loc. 268-273
---
We often forget how overwhelmingly positive the effects of economic growth have been.
loc. 281-282
---
Economist Russ Roberts reports that he frequently polls journalists and asks them how much economic growth there has been since 1900. By Russ’s account the typical answer is that the standard of living has gone up by around fifty percent. In reality, the U.S. standard of living has gone up by a factor of five to seven -- estimated conservatively -- and possibly much more, depending on which techniques we use for measuring prices and the values of outputs over time, a highly inexact science.
loc. 282-285
---
The data show just how much living standards have gone up. In 1900 for instance almost half of all U.S. households (0.49) had more than one person per room and almost one-quarter (0.23) over 3.5 persons per sleeping room. Slightly less than one-quarter (0.24) of all U.S. households had running water, eighteen percent had refrigerators, twelve percent had gas or electric light, and today the figures for all of these stand at 99 percent or higher.
loc. 285-288
---
As recently as the end of the nineteenth century, life expectancy in Western Europe ran about forty years of age. Economist Robert Fogel (2004, pp.8, 9, 34) paints a grim picture of the European past: “...at that time \[eighteenth and early nineteenth centuries\] food constituted between 50 and 75 percent of the expenditures of laboring families...however...the energy value of the typical diet in France at the start of the eighteenth century was as low as that of Rwanda in 1965, the most malnourished nation for that year in the tables of the World Bank.
loc. 298-302
---
Immigrants also send remittances back home and at a rate which far exceeds governmental foreign aid. In all of these ways the actual upward mobility of the United States far exceeds what the usual numbers indicate, because published mobility numbers do not typically include a comparison with pre-immigration outcomes.
loc. 326-328
---
Although it belies a lot of the recent media coverage, which focuses only on “within nation” magnitudes, recent world history has been an extraordinarily egalitarian time. Most of all, it is a story of how global economic growth helps the poor. There has been a squeezing of the middle class in the wealthier nations, in part because of increasing global competition. Still, we have seen economic growth, aggregate wealth, and global income equality all rising together over the last twenty-five years.
loc. 352-356
---
The truth is that economic growth is the only permanent path out of squalor. Economic growth is how the Western world climbed out of the poverty of the year 1000 A.D. or 5000 B.C., it is how much of East Asia became remarkably prosperous, and it is how our living standards will improve in the future. Just as the present appears remarkable from the vantage point of the past, the future, at least provided growth continues, will offer comparable advances, including perhaps greater life expectancies, cures for debilitating diseases, and cognitive enhancements. Billions of people will have much better and longer lives. Many features of modern life might someday seem as backward as we now regard the large number of women who died in childbirth for lack of proper care.
loc. 376-381
---
A critical point is that the importance of the growth rate increases the further into the future we look. If a country grows at two percent, as opposed to growing at one percent, the difference in welfare in a single year is relatively small. But over time the difference becomes very large. For instance, had America grown one percentage point less per year, between 1870 and 1990, the America of 1990 would be no richer than the Mexico of 1990. At a growth rate of five percent per annum, it takes just over eighty years for a country to move from a per capita income of $500 to a per capita income of $25,000, defined in terms of constant real dollars. At a growth rate of one percent, such an improvement takes 393 years.
loc. 391-396
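Cowen's compounding arithmetic checks out, and is easy to reproduce. A minimal sketch (function name mine) using the standard doubling-time logic, years = ln(target/start) / ln(1 + rate):

```python
import math

def years_to_grow(start, target, rate):
    """Years of compound growth at annual `rate` to go from `start` to `target`."""
    return math.log(target / start) / math.log(1 + rate)

# Cowen's example: per capita income from $500 to $25,000 in constant dollars.
print(round(years_to_grow(500, 25_000, 0.05)))  # -> 80  ("just over eighty years" at 5%)
print(round(years_to_grow(500, 25_000, 0.01)))  # -> 393 (at 1%)
```

The same function also illustrates the 1870–1990 comparison: a one-percentage-point difference compounded over 120 years is a factor of (1.01)^120 ≈ 3.3 in living standards.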
---
Robert E. Lucas (1988, p.5), Nobel Laureate in economics, put the point succinctly: “the consequences for human welfare involved in questions like these are staggering: once one starts to think about \[exponential growth\], it is hard to think about anything else.”
loc. 397-398
---
To give an example, if you ask the people of Kenya how happy they are with their health, you get a pretty high rate of reported satisfaction, not so different from the rate in the healthier countries and in fact higher than the reported rate of satisfaction from the United States. The correct conclusion is not that Kenyan hospitals have hidden virtues, or that malaria is absent in Kenya, but rather that Kenyans have recalibrated their use of language to reflect what they can reasonably expect from their daily experiences. In similar fashion, people in less happy situations or less happy societies often attach less ambitious meanings to the claim that they are happy. Evidence based on questionnaires therefore will underrate the happiness of people in wealthier countries.
loc. 426-432
---
By its nature, happiness research draws upon a fixed pool of people in relatively normal circumstances. This will limit its ability to measure some of the largest benefits brought by economic growth and also by change more generally. If we want to be around to even have the option of answering happiness questionnaires, wealth is extremely important.
loc. 522-524
---
To sum this all up, if we take a broad enough and long enough comparison, we will find a lot of choices where aggregation problems are not all that serious, at least not cripplingly so. Given that somewhat cheering reality, I would like to define two principles for practical reasoning. First: The Principle of Growth: “We should make political choices so as to maximize the rate of sustainable economic growth, as defined by Wealth Plus.”

The Principle of Growth would return economics back to its roots in Adam Smith. Smith held a straightforward, common-sense approach to political economy. He understood that the benefits of cumulative growth were significant, especially with the passage of time. It is no surprise that his economics treatise was entitled An Inquiry into the Nature and Causes of the Wealth of Nations.

By the way, I see only one episode in human history where the Principle of Growth was clearly and unambiguously applied, and that is in the East Asian economic miracles, including Japan, South Korea, Taiwan, Hong Kong, Singapore, and China (with a caveat here for sustainability), across the appropriate periods of time in each case. These histories are normally thought of as big economic successes, and of course they are, but I am saying more than that. They also represent the highest manifestation of the ethical good to date in human history. Whereas Hegel saw the 19th century Prussian state as a manifestation of God’s will in history, I am assigning a comparable (but secular) place of importance to the East Asian economic miracles. The word “miracle” truly does apply.

I call the second principle the Modified Principle of Growth. The Modified Principle of Growth: “We should push for sustainable economic growth, but not at the expense of inviolable human rights.”
loc. 587-602
---
If we were willing to trade off these rights against a bundle of other plural values, at some sufficiently long time horizon the benefits from higher economic growth would trump the rights in importance and in essence the rights would cease to be relevant. In other words, the presence of Crusonia plants means that rights – if we are going to believe in them at all – have to be pretty tough and pretty close to absolute in importance if they are surviving as relevant to our comparisons.
loc. 604-607
---
What makes a growth-maximization rule compelling is its attachment to a Crusonia plant, namely very large ongoing and indeed compounding gains in human welfare. Some rules, such as “Never lie,” face embarrassing counterexamples if lying can bring significant practical benefits in particular instances. But a rule of “maximize the rate of sustainable economic growth” does not face a comparable problem. By the very definition of such a rule, it is telling us to follow outcomes with a preponderance of benefits over costs. So practical costs may overturn or modify some rules, but they will not limit The Principle of Growth, which will be limited only by absolute or near-absolute human rights.
loc. 654-659
---
At the end of these arguments, we are led to a surprising conclusion. If the time horizon is sufficiently long, the only non-growth-related values that will bind practical decisions are the absolute side constraints, or the inviolable human rights. In other words, ethics will involve the dual ideas of prosperity and liberty at its center.
loc. 670-672
---
I’ll now look at the issue of time horizon in more detail. If the proper time horizon for our decisions is quite short, none of these arguments will succeed. I’m going to show why the time horizon matters so much and why we – in most but not all cases -- should think in terms of very long time horizons.
loc. 674-676
---
we have a brute, biological preference for the “now” but we will do better if we can get past it. We can do better if we can tap into that part of ourselves which realizes a benefit in twenty years’ time is about as valuable as that same benefit in thirty years’ time. That’s the kind of thinking we need to generalize more thoroughly.
loc. 696-699
---
Very commonly economists and other social scientists speak of a discount rate. A discount rate tells us how to compare future benefits to current benefits (or costs) when we make decisions.
loc. 714-716
---
Most of us are altruistic, especially toward our own children and grandchildren, but this form of partial altruism does not make us care much about other peoples’ grandkids. When people are voting or choosing for all future generations as a whole, they often behave quite selfishly.
loc. 729-731
---
If you own a Rembrandt painting, you’ll probably keep it in decent shape, even if you’re a selfish, culture-less bastard who doesn’t care about the artistic patrimony of the Dutch. These kinds of examples, however, apply only where there are well-defined property rights in specific assets. The motivations behind these behaviors won’t cause us to preserve a workable environment and they won’t cause us to maximize the rate of sustainable economic growth. Once again, the proper depth of concern for the more distant future does not come to us automatically, at least not in a wide variety of cases.
loc. 737-741
---
Expressed differently, for non-tradable and storable assets, markets do not reflect the preferences of currently unborn individuals. The part of economics known as “welfare economics” holds up perfect markets as a normative ideal, yet future generations cannot contract in today’s markets.
loc. 741-743
---
Under any positive discount rate, no matter how low, one life today can be worth more than one million lives in the future, or worth the entire subsequent survival of the human race, if we use a long enough time horizon for the comparison. At the very least, we should be skeptical that positive discount rates apply to every choice before us.
loc. 775-777
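The claim about positive discount rates is simple algebra: N future lives discounted at rate r fall below one present life once (1 + r)^T > N, i.e. at horizon T = ln(N) / ln(1 + r). A quick sketch (function name and the 1% example rate are mine, not the book's):

```python
import math

def breakeven_horizon(n_future_lives, rate):
    """Smallest horizon T (years) at which n_future_lives, discounted at
    `rate` per year, are worth less than one present life:
    solve (1 + rate)**T = n_future_lives for T."""
    return math.log(n_future_lives) / math.log(1 + rate)

# Even at a 'modest' 1% annual discount rate, one present life outweighs
# a million future lives after roughly fourteen centuries.
print(round(breakeven_horizon(1_000_000, 0.01)))
```

Lowering the rate only stretches the horizon; any strictly positive rate eventually discounts an arbitrarily large number of future lives to insignificance, which is exactly why Cowen is skeptical of applying positive discount rates to every choice.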
---
Time preference therefore does not justify the significant discounting of the distant future, even if it justifies Tom wanting a steak dinner sooner rather than later.
loc. 785-786
---
In moral terms maybe time really is an illusion, as Buddha suggested thousands of years ago.
loc. 804-805
---
That said, discounting for risk is justified in a way that discounting for the pure passage of time is not justified. If a future benefit is uncertain, we should discount that benefit accordingly because it may not arrive. But such a practice does not dent a deep concern for the distant future.
loc. 805-807
---
we see plenty of evidence that history can matter over very long time spans. Therefore, any act which strengthens good institutions today has, in expected value terms, a causal stretch running centuries into the future. That again means that our choice of discount rate is of critical importance.
loc. 832-834
---
By the way, we can see the importance of faith to the overall argument. To fully grasp the import of doing the right thing, and the importance of creating wealth and strengthening institutions, we must look very deeply into the distant future. To be sure, as I have argued at length, this is a conclusion suggested by reason and not running against reason. But in the real world of actual human motivations, the application of abstract reason across such long time horizons is both rare and unconducive to getting people emotionally motivated to do the right thing. The actual attitudes required to induce an acceptance of such long time horizons are, in psychological terms, much closer to a kind of faith. We cannot see these very distant expected gains, but nonetheless we must believe in them and hold those beliefs close and dear to our hearts. In this sense we should strongly reject the modern secular tendency to claim that a good politics can or should be devoid of faith.
loc. 834-841
---
No matter how bright the future may seem, it’s still going to yield an ongoing stream of human tragedies. The way to minimize those tragedies, again, is to maximize sustainable growth. Rather than opting for a strictly zero discount rate, I suggest a more modest postulate, to which I already have referred but now will label formally, namely Deep Concern for the Distant Future. In this view, we should not count catastrophic losses for much less, simply because those losses are temporally distant. In the absence of qualifying factors, no amount of temporal distance per se should cause major widespread tragedies to dwindle into current insignificance.
loc. 900-905
---
At this point most people balk and search for some moral principle which limits our obligations to the very poor. One problem is that the needs of the suffering are so enormous that few able or wealthy individuals would be able to carry out individual life projects of their own choosing. They would instead become a kind of utility slave, serving only the interests of others and giving themselves just enough food and fuel to keep going. The result is that utilitarianism, or for that matter many forms of consequentialism, often is seen as an excessively demanding moral philosophy.
loc. 931-935
---
It can be argued very plausibly and I think correctly that we are obliged to help the poor more than we are doing now. But the correct approach to our cosmopolitan obligations does not lead to personal enslavement or massive redistribution of our personal wealth. Most of us should work hard, be creative, be loyal to our civilization, build healthy institutions, save for the future, contribute to an atmosphere of social trust, be critical when necessary, and love our families. Our strongest obligation is to contribute to sustainable economic growth and to support the general spread of civilization, rather than to engage in massive charitable redistribution in the narrower sense. In the longer run, greater economic growth, and a more stable civilization, will help the poor most of all.
loc. 943-948
---
we should redistribute only up to the point which maximizes the rate of sustainable economic growth.
loc. 949-950
---
terms, a larger welfare state will make society less willing to take in many immigrants because they resent being taxed to pay for them. Yet immigration is by far the most effective anti-poverty program that has been discovered. So even if a specified set of welfare expenditures brings some growth benefits, alternative investments may do more for human welfare.
loc. 975-978
---
rather than redistributing most wealth, we would reap greater utilitarian benefits by investing it in high-return activities,
loc. 983-984
---
Even when utilitarianism and common sense recommend the same courses of behavior, they do so for different reasons. Utilitarianism tells us we should work, save, and innovate to serve the purposes of others, including future generations. Common sense morality tells us that we should work and save to take care of our families and because we value our own lives. These two perspectives remain different in their methods and their justifications.
loc. 996-999
---
If historical pessimism is true, as suggested by many old school conservatives, expected rates of return are negative, there are no long-lasting Crusonia plants, and my arguments, even if they hold up logically, do not very much apply.
loc. 1032-1034
---
Some people should make sacrifices to help out but, because we must keep economically advanced civilization up and running, not everyone should make such sacrifices. Arguably this is the paradigmatic payoff structure to address questions related to global poverty, sacrifice, and obligation.
loc. 1073-1075
---
the rich earn higher returns on their accumulated wealth, as indeed has been stressed by the French economist Thomas Piketty. If there is a trickle-down effect from the wealth of the wealthy, combined with a zero rate of discount, it is easy to generate scenarios where utilitarianism recommends redistribution to the wealthy.
loc. 1105-1107
---
we can think of the elderly as individuals who are poor in one particular dimension, namely in their future human capital. The elderly are more likely to die soon than are the young. And while we should do a good deal to help the elderly, the logic of sustainable growth places limits on these obligations too.
loc. 1134-1136
---
We probably do not know the exact correct valuation of an individual life, but we do know that the possibility of commensurability, the pull of the more distant future, the ongoing replenishment of human civilization, and the value of investing in future lives – when considered all together – exert some downward pressure on how much we should invest to extend the lives of the elderly today.
loc. 1164-1167
---
To put it most concretely, today in the United States we are spending too much on the elderly and not enough on the young. And given that the elderly are the ones who vote with greatest frequency, and the young often do not or cannot, that mistake should hardly come as a surprise.
loc. 1170-1172
---
When the rate of intergenerational discount is sufficiently low, maximizing the growth rate takes priority over avoiding one-time expenditures and one-time adjustments.
loc. 1193-1194
---
When the rate of intergenerational discount is sufficiently low, maximizing the growth rate takes priority over avoiding one-time expenditures and one-time adjustments. Even if those one-time expenditures are large, we will earn back that value over time, due to the logic of compounding growth. So if we are approaching climate change as a serious problem, we should pay greater heed to the storms and lesser heed to the relocation costs, compared to what other frameworks will suggest.
loc. 1193-1196
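The "one-time expenditures get repaid by compounding growth" logic can be made concrete. With a one-time fractional output loss L that buys a permanently higher growth rate, the payback horizon solves (1 − L)(1 + g_high)^t = (1 + g_low)^t. A sketch with purely hypothetical numbers (the specific loss and rates are my illustration, not Cowen's):

```python
import math

def payback_years(one_time_loss, g_high, g_low):
    """Years until a one-time fractional output loss is repaid by a
    permanently higher growth rate, i.e. the t solving
    (1 - one_time_loss) * (1 + g_high)**t = (1 + g_low)**t."""
    return math.log(1 / (1 - one_time_loss)) / math.log((1 + g_high) / (1 + g_low))

# Hypothetical: sacrifice 10% of output once to lift annual growth
# from 2.0% to 2.2%; the loss is earned back in a bit over fifty years.
print(round(payback_years(0.10, 0.022, 0.020)))
```

With a low intergenerational discount rate, fifty-odd years is cheap, which is the sense in which relocation costs matter less than the growth path when thinking about climate adjustment.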
---
Individuals who believe in increasing returns models should be much more skeptical of non-growth-enhancing redistribution than individuals who believe in the Solow catch-up model.
loc. 1256-1258
---
both the Solow and Romer models emphasize ideas as the wellspring of economic growth.
loc. 1275-1276
---
A preoccupation with pursuing growth – or some modified version of the growth ideal – thus means a preoccupation with ideas, a preoccupation with cultivating human reason, and a preoccupation with the notion that man should realize, perfect, and extend his nature as a generator of powerful ideas which can change the world.
loc. 1276-1279
---
It’s pretty simple. If you can somehow manage to go back in time, and alter one small event, the entire history of the world can change. One extra sneeze from one caveman, millennia ago, probably would overturn everything we know. Ray Bradbury’s short story “A Sound of Thunder,” published in 1952, is one of the early sources for this idea. It seems a little crazy, but the more you think about it the more it seems to hold.
loc. 1286-1289
---
The key point is that small changes usually turn into big changes.
loc. 1289-1289
---
Even if most people "don't much matter" for broader aggregate outcomes, it sure seems that some do, such as Jesus or Hitler or Lenin or Chairman Mao. For instance, without Hitler Nazism probably would not have succeeded or had the same impact on the world stage. The Second World War, as we know it, would not have happened, nor would have the Holocaust occurred. Virtually every country’s subsequent history would have been different and we would end up with a quite different world history for the rest of humanity’s time on earth.
loc. 1292-1296
---
If Hitler's great-great-grandmother had bent down to pick one more daisy, thereby postponing her next act of intercourse ever so slightly, many of the effects of that delay might have washed out but Europe today would be a very different place.
loc. 1310-1312
---
If you ponder these time travel conundrums enough, you’ll realize that the effects of our current actions are very hard to predict, and that has nothing to do with whether time travel ever comes to pass.
loc. 1320-1321
---
The epistemic critique suggests consequentialism cannot be a useful guide to action because we hardly know anything about long-run consequences. While it is true we can calculate expected value, such a calculation is typically based on a very limited range of information about present consequences or consequences in the near future. Like many philosophers, I wonder if we have the correct moral theory in the first place, if we cannot know ninety percent, or perhaps 99.9 percent, of what is to count toward a good outcome.
loc. 1322-1325
---
To put it simply, it is difficult to see the violent destruction of Manhattan as on net, in ex ante terms, favoring the long-term prospects of the world. We can imagine scenarios where the destruction of Manhattan works out for the better ex post; perhaps, for instance, the explosion leads to a powerful anti-proliferation movement, which turns out to save the entire world in some longer run.
loc. 1365-1368
---
At some point we can find a set of consequences so significant that we would be spurred to action, without much epistemic reluctance, even though we would be recognizing the broader uncertainties of the very long run. Surely at some point the upfront benefit must be large enough to persuade us to pursue it. We can debate “how large” an upfront event is needed to sway us toward making an actual evaluation and recommendation, but a large enough upfront event should suffice.54
loc. 1376-1380
---
We therefore can avoid complete paralysis or sheer and absolute agnosticism, at least for some of our choices. No matter how high the uncertainty surrounding our estimates of subsequent consequences, we can take some actions to favor good consequences in the short run. It is only necessary that those short-run good consequences are of sufficiently large and obvious value. We can recognize the subsequent radical uncertainty, but still the upfront benefit can be large
loc. 1380-1383
---
Lenman (2000) appears to favor "ethical theories for which the focus is on the character of agents and the qualities of their wills, for theories that are broadly Kantian or Aristotelian in spirit."
loc. 1408-1409
---
course of action is sufficiently high, the epistemic critique has less force, though we remain uncertain as to whether we will choose the correct beach for defeating Hitler. We remain uncertain about the long-run “remixing” effects of our choice. Still, we must pursue large benefits when we can – like the one hundred lives – at least provided there is no good reason not to.
loc. 1432-1434
---
The Principle of Roughness: “Some of our choice options will differ in complex ways. We might nonetheless, ex ante, make a reasoned judgment that they are roughly equal in value, and that we should be roughly indifferent across the two options. After making a small improvement to one of these choices, we still might be roughly indifferent to which option is better.”
loc. 1438-1441
---
ex ante,
loc. 1439-1439
---
We often resort to The Principle of Roughness in aesthetics. Assume we are trying to judge whether Beethoven or Mozart is the better composer. We might judge the two composers as being roughly equal, or judge that neither composer can be elevated over the other. Assume then that we discover one new work by Beethoven, a lovely two-minute bagatelle for piano. We are not now obliged to assert that Beethoven is the better composer. Our original judgment of equality was sufficiently “rough” that it can survive this new discovery. In contrast, a very exact comparison of equality, such as that of weight or length, could be upset by a small change at the appropriate margin of measurement. For this reason, The Principle of Roughness seems especially likely to apply to aesthetic comparisons.56
loc. 1442-1447
---
In most applications of The Principle of Roughness (e.g., Mozart vs. Beethoven), small changes (e.g., discovery of an extra sonata) are swamped by high absolute totals (of achievement) in the first place. In the D-Day example, the small change – the dog’s leg -- is swamped by the high variance in our estimates of consequences. In other words, the epistemic critique extends one version of the Principle of Roughness to comparisons involving uncertainty.
loc. 1462-1465
---
To borrow a metaphor, anything we try to do today is “floating in a sea of long-run radical uncertainty.” Only big, important goals will, in reflective equilibrium, stand above the ever-present froth and allow the comparison to be anything more than a rough one. When small goals are at stake, our moral intuitions become confused -- properly or not -- and as a result we downgrade the importance of those small goals. If there is any victim of the epistemic critique, it is focusing on small benefits and costs, but not consequentialism more generally. If we bundle appropriately and “think big” and pursue Crusonia plants, our moral intuitions will rise above the froth of long-run variance.
loc. 1468-1473
---
Given the radical uncertainty of the more distant future, we do not usually have a good idea how to achieve our preferred goals over longer time horizons. Our attachment to particular means therefore should be highly tentative, highly uncertain, and radically contingent. Our specific policy recommendations, though we believe them to be the best available, will stand only a slight chance of being correct. They ought to stand the highest chance of being correct, of all available views, but this chance will not be very high in absolute terms. We should think of the details of our political views as analogous to betting on a slightly crooked roulette wheel, designed to land on the number seven more than a proportionate amount of the time. We should bet on the slightly favored outcome, namely the number seven, and by doing so we improve our prospects. But most of the time we are likely to predict the wrong number, as we will be betting on seven and some other number will come up. Our political stances and policy recommendations should be accordingly tolerant. Imagine a world where your chance of being right is two percent, and your chance of being wrong is ninety-eight percent.
loc. 1483-1492
---
We should choose the course that is most likely to be correct, but at the end of the day we are more likely wrong than right. Our particular views, in politics and elsewhere, should be no more certain than our assessments of how to play that roulette wheel. With this attitude, political posturing loses much of its fun and indeed it ought to be viewed as disreputable or perhaps even as a sign of our own personal delusions.
loc. 1497-1500
---
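The roulette analogy can be made concrete with a quick expected-value check. The numbers below are my own illustration, not the book's: a wheel biased so that 7 lands with probability 1/34 instead of the fair 1/38 of an American wheel.

```python
# Illustrative numbers (mine, not the book's): a wheel slightly biased
# toward the number 7, paid at the standard 35-to-1 single-number odds.
p_seven = 1 / 34   # a fair American wheel would give 1/38
payout = 35        # win 35 units per unit staked, else lose the stake

expected_value = p_seven * payout - (1 - p_seven)  # per unit bet
loss_frequency = 1 - p_seven

print(f"EV per unit bet: {expected_value:.3f}")        # positive
print(f"share of spins you lose: {loss_frequency:.1%}")  # about 97%
```

Betting on 7 is the best available policy (positive expected value), yet you still lose roughly 97 percent of individual spins, which is exactly the attitude the quote recommends toward our particular policy views.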
Let us consider, for instance, the right of an innocent baby not to be murdered. Let’s say you believe in such a right, as I do, and then you are presented with a counterexample where killing that innocent baby will, in the short run, raise national income by $5 billion. Normally, economists would value a life at much less than $5 billion, typically in the neighborhood of about $5 million, which is a big difference. Yet in this instance it is wrong to set up the comparison of “baby’s life vs. $5 billion” and then have to choose. The correct comparison is “baby’s life vs. a froth of massive uncertainty with a gain of $5 billion tossed in as one element of that froth.” When it is phrased that way, it is easier to side with preventing the murder of the baby. There is even a good chance – albeit a less than fifty percent chance – that stopping the murder of the baby will be good for gdp too.
loc. 1502-1509
---
“lifeboat ethics” might differ from our more usual and more practical ethical recommendations. I define lifeboat ethics as the ethics which should hold as the end of the world – or the end of some sufficiently segmented part of the world -- approaches. People in a (not-to-be-rescued) lifeboat cannot look forward to great improvements in their future welfare or much economic growth. The sharks are circling and they expect their supplies of food and water to run out. By construction of the example, the lifeboat is not connected to the broader froth of uncertainty in the world at large. So what does that mean? In lifeboat settings, the benefits at stake typically will be small precisely because lifeboats, even the relatively large ones, are small. Rights therefore acquire greater force in relative terms. No matter what you do, you can’t produce large social benefits in lifeboat examples, and so there is a stronger case for simply doing the right thing.
loc. 1535-1542
---
First, believing in the overriding importance of sustained economic growth is more than philosophically tenable and it may be philosophically imperative. We should pursue large rather than small benefits and we should have a deep concern for the more distant future, rather than discounting it exponentially. Our working standard for evaluating choices should be to increase sustainable economic growth, because those choices overcome aggregation problems and they are decisively good. That provides us with a broad quantitative proxy for the long-run development of human civilization, and it constitutes one means of finding and promoting comoving plural values. Second, there is plenty of room for our morality, including our political morality, to be strict and based in the notion of rules and rights. We should subject ourselves to the constraint of respecting human rights, noting that only semi-absolute human rights will be strong enough to place any constraint on pursuing the benefits of a higher rate of sustainable economic growth. At the end of that tunnel we have not “The Best Ethical Theory,” as a philosopher might wish to derive, but rather some good decision rules to live by and also some standards for how we might imagine a much brighter future.
loc. 1552-1562
---
once we are thinking naturally in terms of big, packaged changes, a belief in rights fits in quite naturally. We have some rules for what to do – maximize sustainable growth – and other rules – rights -- which place some constraints on those choices. In other words, the lower-order rules stand within some higher-order rules, namely respecting the rights. Across the entire map we should stick to our chosen priorities, and our chosen rules, rigidly.
loc. 1562-1566
---
a. Policy should be more forward-looking and more concerned about the more distant future.
b. Governments should place a much higher priority on investment than is currently the case, whether that concerns the private sector or the public sector. Relative to what we should be doing, we are currently living in an investment drought.
c. Policy should be more concerned with economic growth, properly specified, and policy discussion should pay less heed to other values. And yes, your favorite value gets downgraded too. No exceptions, except of course for the semi-absolute human rights.
d. We should be more concerned with the fragility of our civilization.
e. The possibility of historical pessimism stands as a challenge to this entire approach, because in that case the future is dim no matter what and there may not be a more distant future to resolve the aggregation dilemmas involved in making decisions which affect so many diverse human beings.
f. At the margin we should be more charitable but we are not obliged to give away all of our wealth. We do have obligations to work hard, save, invest, and fulfill our human potential, and we should take these obligations very seriously.
g. We can embrace much of common sense morality, while knowing it is not inconsistent with a deeper ethical theory. Common sense morality also can be reconciled with many of the normative recommendations which fall out of a more impersonal and consequentialist framework.
i. When it comes to most “small” policies, affecting the present and the near-present only, we should be agnostic because we cannot overcome aggregation problems to render a defensible judgment. The main exceptions here are the small number of policies which benefit virtually everybody.
loc. 1584-1598
---
I therefore would like to be more suspicious of our little voice in favor of supreme short-run pragmatism. I wish to suggest that it is a vice, the thinking man's equivalent of the savage's short-run gratification. It is our latest version of how to feel good about ourselves, at the potential danger of, in modern terms, letting Rome burn. I suggest that we should instead turn our political energies to thinking about the long-run fortunes of our civilization. That means focusing on the future of freedom, wealth, science, and healthy, well-functioning institutions, governed by rules and rights.
loc. 1622-1626
---
it is surprisingly difficult to find a welfare algorithm that avoids the endorsement of sheer numbers per se. Many of the attempts to cut off an endorsement of the second scenario fall prey to further philosophic counterexamples.65
loc. 1743-1745
---
Parfit's statement of the Repugnant Conclusion reads as follows: "The Repugnant Conclusion. For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population, whose existence, if other things were equal, would be better, even though its members have lives that are barely worth living."66
loc. 1745-1748
---