# Tyler Cowen

Tyler Cowen thinks that philosophers focus too much on questions of how to distribute wealth, and not enough on how to create it.

Here’s Tyler describing Stubborn Attachments, his most philosophical book:

> People always think they’re more right on average than they are. This is true of everyone. If it’s true of everyone it has to be true of me, so I wanted to build a set of arguments that in some way were robust to me being wrong most of the time, and that’s hard to do. If you’re wrong most of the time, your arguments are wrong most of the time. But is there some meta-level where there’s a claim you can make that is taking that into account in some way?

Tyler’s framework starts with the idea that wealth is a necessary, nearly sufficient condition for realising the plurality of values people care about [1]. If you increase the long run sustainable rate of growth in wealth, compounding effects will generate very large improvements in average wealth a few decades down the line. This means that the positive consequences of actions that improve the sustainable growth rate dwarf those of nearly all actions that don’t improve the growth rate.
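To see the force of the compounding claim, here is a minimal back-of-envelope sketch (my stylised numbers, not Tyler’s): a one-percentage-point difference in the annual growth rate opens up a dramatic gap in average wealth within a lifetime.

```python
# Back-of-envelope illustration of compounding growth (stylised numbers, not Tyler's).

def wealth_multiple(growth_rate: float, years: int) -> float:
    """Factor by which average wealth grows after `years` at a constant annual rate."""
    return (1 + growth_rate) ** years

for years in (25, 50, 100):
    low = wealth_multiple(0.02, years)   # 2% sustainable growth
    high = wealth_multiple(0.03, years)  # 3% sustainable growth
    print(f"{years:>3} years: {low:5.1f}x at 2% vs {high:5.1f}x at 3%")

# Roughly:  25 years:  1.6x vs  2.1x
#           50 years:  2.7x vs  4.4x
#          100 years:  7.2x vs 19.2x
```

With these stylised numbers, the higher-growth world is nearly three times as wealthy after a century, which is the sense in which changes to the growth rate dwarf one-off gains.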

With this in mind, it seems like this imperative is an underrated place to start:

> Maximise the rate of sustainable economic growth.

We should make some adjustments away from this (for example, we can maintain an almost absolute commitment to basic human rights: “don’t murder”, “don’t torture”), but we should keep in mind how costly such deviations are for future people. We shouldn’t casually throw in our favourite causes, or some feel-good concessions to the here and now. According to Tyler, it really is just basic human rights, and, well… it’s not clear what else might be above the bar.

One apparent virtue of the framework is that it relaxes the need to agree on what is ultimately valuable before we collaborate. On a sufficiently long timescale, the “rising tide lifts all boats” effect means the world should get radically better from most perspectives [5].

Taking a long-term view like this is unsettling insofar as it conflicts with the intuition that we should focus on helping people in the here and now (e.g. by redistributing wealth for reasons other than maximising the growth rate). Tyler recognises this, but he thinks the burden of proof is on those who want to say that future people matter less than current people, whether in theory or in practice. We should be haunted by tragic tradeoffs, but we should not think trades that benefit current people at the expense of future people can be justified by strength of feeling alone [2]. The right decision is not always the currently popular decision. Indeed, while the interests of today’s poor are often badly represented by our political systems, the interests of future people seem even more likely to be systemically neglected: future people cannot complain, because physics.

Stubborn Attachments puts less emphasis on sustainability than the work of other long-term thinkers like Nick Bostrom, Derek Parfit, Richard Posner, Martin Rees and Toby Ord. On the 80,000 Hours podcast, Tyler explained that existential risk was much more prominent in early drafts of the book, but that he decided to de-emphasise it after Posner and others began writing on the topic. In any case, Tyler agrees with the claim that we should put more resources into reducing existential risk at current margins. However, he seems, like Peter Thiel, to see the political risk of economic stagnation as a more immediate and existential concern than these other long-term thinkers do. Speaking at one of the first effective altruism conferences, Thiel said that if the rich world continues on a path of stagnation, it’s a one-way path to apocalypse; if we start innovating again, we at least have a chance of getting through, despite the grave risk of finding a black ball.

Tyler may also have a different view about what messages should be blasted into the public sphere. Perhaps this is partly due to a Deutsch / Thiel-style worry about the costs of cultural pessimism about technology. Martin Rees claims that democratic politicians are hard to influence unless you first create a popular concern; my guess is that Tyler either thinks politicians aren’t the centre of leverage for this issue, or thinks there are more direct ways to influence them. In any case, it’s clear Tyler thinks that most people should focus on maximising the (sustainable) growth rate, and only a minority should focus on existential safety. Some perspectives find this counterintuitive, but on the Cowen/Thiel picture, it’s consistent to say both that growth is too slow and that sustainability is underrated.

One other thing: Tyler sometimes says he thinks that human life on earth has only centuries (not millennia) ahead of it, and he sometimes justifies this by saying that we’ll surely destroy ourselves with nuclear weapons within a few centuries. I wonder if he also thinks that, in the event our descendants do make it through this millennium, they will be such different beings that they won’t count as “us” in any sense we should care about. In any case, he seems to think it’s not worth betting very heavily on the small probability that we make it to the stars [4], though he does support putting more resources towards such projects.

I’d like to understand more about Tyler’s views on the prospect of transformative AI, and I also want to hear more of his thoughts on Nietzsche and Bernard Williams, both of whom barely get a mention on his blog.

This entry has focussed on his worldview, but Tyler’s “real world” influence over the past decade or two has been huge. To take one example: my guess is that Tyler’s efforts against COVID-19 have saved thousands of lives and millions of dollars, by speeding up information transmission via his blog and by speeding up grantmaking via Fast Grants. I would not be shocked if the true figures were millions and billions, respectively, though he does not work alone, and doling out counterfactual credit is hard.

Tyler’s interviewing style is widely acclaimed. He is a master educator, and his trademark “underrated or overrated” segment is about teaching us how to think: the fertility of marginal thinking, the difficulty of appropriate calibration, the value of less-than-timeless truths [6].

I hope to write more on Tyler’s work soon. For now, I’ll close with a youthful brag: Tyler started his interview with Annie Duke using a question I suggested. Given how much I admire him, this made me unreasonably happy.

Places to start:

Follow Tyler:

[1] I’m fuzzy on what Tyler means by “wealth” and what the relationship between “wealth” and “intrinsic value” is supposed to be. @TODO

[2] I’ve not read strong consequentialist arguments for a non-zero discount rate for welfare. Rather remarkably, this seems like an area of moral philosophy that is pretty settled [3]. It is hard, to paraphrase Rob Wiblin, to seem sensible if you equate the welfare of a single ancient Egyptian with that of millions of people alive today. If one wanted to fudge in some more weighting for the current generation, one option would be to flesh out the deontological constraints. Tyler is deliberately vague on where our bar should be, e.g. he says we should spend less on the elderly, but does not give detail.
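As a rough worked version of the Wiblin point (my numbers, purely illustrative): even a modest constant discount rate, compounded over historical timescales, produces absurd-looking welfare comparisons.

```python
# Illustrative only: what a constant 0.5% annual pure discount rate on welfare
# implies when compounded backwards over roughly 3,000 years.
discount_rate = 0.005  # assumed pure time preference, not an empirical figure
years = 3000           # roughly the distance to ancient Egypt

relative_weight = (1 + discount_rate) ** years
print(f"{relative_weight:,.0f}")  # ~3 million: one ancient life would outweigh millions of lives today
```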

[3] Hmm… this is actually so surprising … am I missing something here? IIRC GPI has looked into this and there was that Mogensen paper, but I’m fairly sure that was a non-consequentialist argument @TODO. What happens if one takes a broader definition of “serious moral philosophy”? I guess the best candidate might be the “rational arguments are just rationalisations of power” crowd.

[4] I am keen to figure out why. A few ideas:

  • Distrust of Pascal’s Wager type arguments.
  • Some of his “cone of value” and “incommensurability” comments suggest he thinks that we can’t intelligibly talk about the value of a world containing our posthuman descendants.
  • There’s actually not a substantive disagreement, just a disagreement about what it’s useful to say.

[5] What about people who are replaced by machines? Or people who are very strongly attached to a current way of life that becomes unviable? @TODO

[6] One cool thing about “thinking on the margin” is that it gives the philosophically inclined an important reminder: much of the time, you actually don’t need to know exactly where you want to end up, just which direction seems robustly good to move in.
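A loose computational analogy for this (mine, not from the text): greedy local search improves an objective step by step using only local comparisons, without ever needing to know where the optimum lies.

```python
# Toy hill-climbing sketch (my analogy, not from the text): keep whichever small
# move looks locally better, without knowing the destination in advance.
import random

def hill_climb(score, x: float, step: float = 0.1, iters: int = 2000) -> float:
    """Greedy local search: accept any random small move that improves `score`."""
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if score(candidate) > score(x):
            x = candidate
    return x

# Example: the peak of this function is at 3.0, but the search never uses that fact.
print(round(hill_climb(lambda x: -(x - 3.0) ** 2, x=0.0), 2))  # ≈ 3.0
```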

Last updated: April 2021