When making important personal decisions, utilitarianism offers a useful framework, best seen as one among several perspectives to consider rather than a complete decision procedure.
It is important to be clear on the distinction between a theory of value and a theory of how to act.
When doing theory of value, it might seem quite obvious that, once you've got a story about what is valuable, you can say that more value is better. But it does not follow from this, at all, that individual humans should go around trying to maximise value, consciously or otherwise.
Firstly, there's the pragmatic question of what ways of thinking and acting actually lead humans to maximise value.
There's also the issue Sidgwick called "the duality of practical reason": the tension between personal ends (sometimes called partial or egoistic) and the moral call to impartially improve the world.
(One might wonder just how coherent the notion of "impartially improving the world", or "the perspective of the universe", even is. That's for another note. Maybe [[Sidgwick, Singer, Nietzsche on intuitions]]?)
---
[[=Nick Beckstead]] on his personal relationship with utilitarianism:
> I think there was a point in my life when I was like a diehard utilitarian, and I was like this is the way that things should be done. I think over time I have kind of backed off of that a little bit and I now have a more circumscribed kind of claim, that's kind of like **I can articulate some conditions under which I think a type of utilitarian reasoning is roughly right for a certain purpose**. If I was going to put a quick gloss on it, it would be like actions that are conventionally regarded as acceptable and you are happy to do them. **There are some times in your life where basically what you are trying to do is to help people [sentient beings] and to help them impartially** […] and you're trying to do good, you're trying to do it impartially and [...] there is no temptation to do anything sketchy with that, you're acting fully within your rights according to any common sense conception of how things are, and you're happy to do it, let's say it's a sacrifice you are happy to make, say it involves giving some money or spending some time. And I think [in circumstances such as these], as a first cut utilitarianism can be your go-to answer. And this is distinct from saying that utilitarianism is the master theory of value that works for all situations no matter what. […] And I would include in those stipulations that you are not violating what people conventionally conceive of as rights. And that's gonna get squishy a little bit. If you say "so is it convention that matters?" I'm going to say no, it's not convention exactly that matters, and then you start saying "well what is it", I'm gonna have a little bit of a hard time pinning that down. But I would say that convention is a good first cut and I want to make a further claim that you really can do a lot with this.
On utilitarianism within effective altruism:
> Spencer: Maybe a better characterisation is that utilitarianism is something that a lot of EAs lean on for a bunch of considerations, but actually a lot of EAs are not pure utilitarians.
>
> NB: I think that's right. I would say I'm not a pure utilitarian, I just use utilitarianism a lot, it's like generating a lot of my insight. If you had never heard of util and you were trying to understand what I am trying to do by looking at my life, I think you'd have a hard time. But I don't think it's good or healthy to go so all-in on it. **I'd like a better name for it. I've heard Tyler Cowen say two-thirds utilitarianism, I kind of like it.**
Lukas Gloor on moral anti-realism:
https://forum.effectivealtruism.org/posts/C2GpA894CfLcTXL2L/against-irreducible-normativity
In his essay "A Critique of Utilitarianism," Bernard Williams (Williams, 1973) argued that there is something wrong with the utilitarian thought process. If someone believes utilitarianism is the right moral theory, there's an important sense in which there is no room for the person to choose their life projects. According to utilitarianism, what people ought to spend their time on depends not on what they care about but on how they can use their abilities to do the most good. What people most want to do only factors into the equation in the form of motivational _constraints_, constraints about which self-concepts or ambitious career paths would be long-term sustainable. Williams argues that this utilitarian thought process alienates people from their actions, since it makes it no longer the case that actions flow from the projects and attitudes with which these people most strongly identify.
Williams framed this as an argument against utilitarianism. However, I find that what he was objecting to is primarily the existence of external moral obligations (perhaps _in conjunction with_ consequentialism).[[11]](https://forum.effectivealtruism.org/#fn-nnPxpBkJmpavKnzXK-11) Williams's critique misses the mark for people who think of utilitarianism (or consequentialism more generally) as a _personal philosophy_. Under anti-realism, the arguments for consequentialist morality don’t disappear—they take on a different, less prominent place in people’s philosophical framework. Instead of “utilitarianism as the One Compelling Axiology,” we consider it as “utilitarianism as a personal, morally-inspired life goal.”
[…]
Moral realism or not, our choices remain the same. What's conceptualized differently, under anti-realism, is that we no longer frame them in terms of what's (externally) moral, but **in terms of what sort of person we want to be and what we want to live for**. Shouldering the responsibilities of consequentialism—if we decide to go down that road—_won't_ feel like an attack against our integrity, since we'd be choosing it freely.[[12]](https://forum.effectivealtruism.org/#fn-nnPxpBkJmpavKnzXK-12)
Other people’s life choices may differ from ours. In some instances, we might be able to point out that they’re committing an error that they might recognize by their criteria. In that case, normative discussions can remain fruitful. Unfortunately, this won’t work in all instances. **There will be cases where no matter how outrageous we find someone’s choices, we cannot say that they are committing an error of reasoning.**