Edit 2022-12: This important note is a mess, sorry.
I just put a couple quotes up at [two-thirds-utilitarian.com/](https://two-thirds-utilitarian.com/). That's the place to start.
---
> Constraints, options and special obligations—maybe they have a utilitarian reduction, maybe they don't. I'm more married to them than I am to the idea of utilitarianism per se. Let's respect these things in practice but I'm still really into the idea of doing as much good as possible with a big part of my life and I think utilitarianism is the most productive framework for that. I mean I wouldn't even sign up for "always do the utilitarian thing" in that setting, but I would sign up for like when you're trying to figure out what to do in that setting, try a hand at the utilitarian calculus and see where it gets you, and let's have that be our first cut. And sometimes things might go in too crazy a direction and you're not going to endorse it.
> —Nick Beckstead ([in conversation with Spencer Greenberg](https://clearerthinkingpodcast.com/?ep=042)).
### Nick Beckstead on Clearer Thinking
Beckstead: I think there was a point in my life when I was like a diehard utilitarian, and I was like this is the way that things should be done. I think over time I have kind of backed off of that a little bit and I now have a more circumscribed kind of claim, that's kind of like **I can articulate some conditions under which I think a type of utilitarian reasoning is roughly right for a certain purpose**. If I was going to put a quick gloss on it, it would be like actions that are conventionally regarded as acceptable and you are happy to do them. There are some times in your life where basically what you are trying to do is to help people [sentient beings] and to help them impartially […] and you're trying to do good, you're trying to do it impartially and [...] there is no temptation to do anything sketchy with that, you're acting fully within your rights according to any common sense conception of how things are, and you're happy to do it, let's say it's a sacrifice you are happy to make, say it involves giving some money or spending some time. And I think [in circumstances such as these], as a first cut utilitarianism can be your go-to answer. And this is distinct from saying that utilitarianism is the master theory of value that works for all situations no matter what. […] And I would include in those stipulations that you are not violating what people conventionally conceive of as rights. And that's gonna get a little bit squishy. If you say "so is it convention that matters?" I'm going to say no it's not convention exactly that matters, and then you start saying "well what is it?", I'm gonna have a little bit of a hard time pinning that down. But I would say that convention is a good first cut and I want to make a further claim that you really can do a lot with this.
You know, if somebody's mission in life is to do as much good as possible, I think most of the good ways of doing that don't require a lot of lying or breaking promises or violently coercing people to do things.
[...]
Spencer: Maybe a better characterisation is that utilitarianism is something that a lot of effective altruists lean on for a bunch of considerations, but actually a lot of EAs are not pure utilitarians.
NB: I think that's right. I would say I'm not a pure utilitarian, I just use utilitarianism a lot, it's like generating a lot of my insight. If you had never heard of utilitarianism and you were trying to understand what I am trying to do by looking at my life, I think you'd have a hard time. But I don't think it's good or healthy to go so all-in on it. I'd like a better name for it. I've heard Tyler Cowen say two-thirds utilitarianism, I kind of like it.
### Tyler on EconTalk
Russ Roberts: Let's turn to a philosophical question, which is utilitarianism, which you write quite a bit about in the book. I think you define yourself as a 2/3 utilitarian. What do you mean by that?
Tyler Cowen: Well, that was a little tongue in cheek. But, I think if you are looking at a public policy, the first question you should ask should be the utilitarian question: Will this make most people better off? It's not the endpoint. You also need to ask about justice. And you should consider distribution. I think you should consider, say, how human beings are treating animals. You might want to consider other broader considerations. But that's the starting point. And if your policy fails the utilitarian test, I'm not saying it can never be good. But it has, really, a pretty high bar to clear. So, when I said "two thirds," that's what I meant.
---
This relates to [[=Holden Karnofsky]] on moderation in that Ezra Klein interview.
And I really want to read Tyler's book again. Again.
### Tyler in Stubborn Attachments
I sometimes call myself a “two-thirds utilitarian,” since I look first to human well-being when analyzing policy choices. If a policy harms human well-being, on net, it has a high hurdle to overcome. If “doing the right thing” does not create a better world in terms of well-being on a repeated basis, we should begin to wonder whether our conception of “the right thing” makes sense. That said, human well-being is not always an absolute priority—thus the half-in-jest reference to my two-thirds weighting for utility. We sometimes ought to do that which is truly just, even if it is painful for many people. I should not forcibly excise one of your kidneys simply because you can do without it and someone else needs one. We should not end civilization to do what is just, but justice does sometimes trump utility. And justice cannot be reduced to what makes us happy or to what satisfies our preferences.
See also: [Holden Karnofsky on thin utilitarianism](https://forum.effectivealtruism.org/posts/iupkbiubpzDDGRpka/other-centered-ethics-and-harsanyi-s-aggregation-theorem#The_scope_of_utilitarianism); [Beneficentrism](https://www.philosophyetc.net/2021/12/beneficentrism.html)
## Notes
- [[Two-thirds utilitarianism]] - the other third is excellence / moral perfectionism?
### Freewriting
I have warmed to hedonistic utilitarianism a lot over the past few years, to the point where I want to endorse it as one of our best theories about what we owe to each other, while also discouraging myself (and others) from going "all in" on it.[^1] For that reason, the position which Tyler Cowen and Nick Beckstead have discussed under the heading of "two-thirds utilitarianism" is attractive. But I want to get clearer on what this actually involves—at least on my conception.
[^1]: The reasons for this are not themselves entirely utilitarian, but rather a stubborn attachment to pluralism, convention, and an "I don't like this" veto.
These discussions get confusing because utilitarianism involves a bundle of claims, roughly:
- What matters: only experiences of pleasure and suffering; everything else is instrumental to that.
- What we should do, in theory: maximise balance of pleasure over suffering.
- What we should do, in practice: whatever maximises balance of pleasure over suffering. Often this won't involve thinking in utilitarian terms, but instead following rules of thumb or common sense moral intuitions, including (probably) a bunch of self-centered relations.
On my conception, two-thirds utilitarianism takes improving the balance of pleasure over pain to be of central importance, and one of the main things to think about when we are thinking about what we owe each other.
But it does not take hedonic states to be the only things that matter. It's a pluralist position, which posits beauty and knowledge and freedom and diversity (among other things) as fundamental goods.
It is also a non-consequentialist position, because it recognises constraints, options and special obligations—tentatively claiming that many actions/rules/dispositions are not required even if they would lead to the best consequences. In a slogan: aim for the best consequences, unless you have very good reasons not to. We leave the assessment of those reasons to individual judgement, expecting individuals to reach different conclusions, and welcoming that divergence (on grounds of epistemic modesty and a meta-commitment to portfolio strategy).
Two-thirds utilitarianism is a *messy* position. It approves of individual attempts to construct crisp, tidy moral theories, but in the final analysis, it holds even the best of these theories at arm's length.[^2] It is committed to case-by-case judgement calls whose reasons can't be fully articulated and codified, and about which we may face intractable incomprehension and disagreement. To some this will seem unsatisfying; to me it seems realistic. Some worry that the position is too conservative, or that it is disingenuously slippery. I think it occupies a sweet spot: it has the power to generate important new insights, but it is also honest in its foregrounding of messiness, uncertainty, and individual judgement. In particular, it takes our contingent identity as humans at this moment in history seriously, rather than trying to abstract away to a more general perspective (e.g. Sidgwick's point of view of the Universe).
[^2]: Tyler sometimes speaks of "epistemic hovering" as a central virtue, though he has also written on [the value of dogmatism](https://sun.pjh.is/tyler-cowen-on-reasons-to-be-dogmatic)—portfolio strategy again.
Tyler likes to say "all thinkers are regional thinkers". Full-fat utilitarians tend to shy away from that idea. The two-thirds utilitarian tries to integrate it. #todo - say more
### Normative and practical ethics
In *some* circumstances:
- We should evaluate actions, policies, dispositions in terms of their expected consequences for pleasure and suffering.
- Such evaluation is the best place to start. It's rare that other considerations outweigh the results of that evaluation.
- The considerations that most often do outweigh it stem from the conjunction of near-absolute respect for human rights and epistemic modesty.
- If your utilitarian calculation seems to recommend an action with high short-run costs (by utilitarian lights) or with very unpalatable characteristics (e.g. murder), you should nearly always refrain from performing the action, even if you can't see why the utilitarian calculation is wrong.
- Sometimes we leave utility on the table in order to respect human rights, and the justification for that is not always given in terms of utility.
Examples of such circumstances:
- Thinking about government policy.
- Reflecting on what to work on.
- Reflecting on where to donate.
- (?) Thinking about what character traits to cultivate.
In *most* circumstances:
- People should act according to common sense ethics and morality.
One role of moral philosophers is to evaluate and suggest revisions to common sense morality. Moral theories like utilitarianism are one good source of ideas. It's also good to just look at the world—understanding material conditions, and looking at what people actually value.
One huge attraction of the utilitarian framework is that it counters scope insensitivity. It also widens our circle of concern.
The two-thirds utilitarian is aware of how hard it is to anticipate the consequences of even small changes to moral norms. The Singerian consequentialist is a bit more gung-ho, while the two-thirds utilitarian is more concerned that the theory is difficult to implement well, and tends to favour hedging with small changes plus a "try it and see" approach.
### Metaethics
It's basically the naturalist/pragmatist picture sketched by [[=Joshua Greene]].
Pleasure and suffering serve as a "common currency" to support coordination and political compromise between diverse groups.
The focus is on enabling diverse groups to coordinate and flourish. It is not on maximising the balance of pleasure over pain in the universe.
There is a background commitment to the realisation of a plurality of values and forms of life.
The thought is that utilitarianism is a partial story that captures an important core of what matters to people. We can build a lot of good things (by our own lights) on top of this story. Utilitarianism is thought of as our best ethical theory, but not as an external authority that humanity must obey.
### Brief examples
Some things I would like to see society invest more in:
- Genetic technologies to enable healthier children.
- Extending healthspan.
- Reducing catastrophic and existential risk.
These are all examples where I take the strong utilitarian argument to trump other concerns.
Some things I would not like to see:
-
Some things I am unsure about, but lean in favour:
- Normalise dying with dignity (including assisted dying and old age suicide) as part of a push to spend fewer resources on the very old.
I think we should i
### Criticisms
May be self-contradictory.
May be too conservative.
May be motivated by social acceptability.
### Major alternatives
A three-thirds utilitarian would view utilitarianism as a candidate for the one true theory of what matters and how to act.
### Questions
- Is the thought that utilitarianism is basically correct, but it's just a very sharp and heavy sword that is difficult for individual humans to wield effectively?
- Or is the thought that utilitarianism is a partial but powerful story, one that we can build a lot of good things on top of?
### Tyler somewhere
I would say I’m a consequentialist but there’s a relativistic element to my consequentialism. So questions like, “How many happy plants are worth the life of one baby?” — Maybe there can never be enough. But, I suspect the question just isn’t well-defined. How many dogs should die rather than one human being? I don’t even know what the units are. So, I think the utilitarian part of consequentialism only makes sense within frameworks where there’s enough commonality to compare wellbeing.