## Relating to utilitarianism

Spencer: Maybe a better characterisation is that utilitarianism is something that a lot of effective altruists lean on for a bunch of considerations, but actually a lot of EAs are not pure utilitarians.

NB: I think that's right. I would say I'm not a pure utilitarian, I just use utilitarianism a lot; it generates a lot of my insight. If you had never heard of utilitarianism and you were trying to understand what I am trying to do by looking at my life, I think you'd have a hard time. But I don't think it's good or healthy to go all-in on it. I'd like a better name for it. I've heard Tyler Cowen say "two-thirds utilitarianism"; I kind of like that.

---

Most of the interesting action is going to be around: when is this type of [utilitarian] calculation appropriate and helpful, and when is it not?

## Spencer Greenberg interview

---

Nagel and Scheffler: constraints, options and special obligations.

Constraints: cases where what you're doing might seem, or even be, best on the utilitarian calculus, but it still seems wrong for some reason. E.g. lynching an innocent person to placate the mob. "Constraint-bound utilitarianism". Sometimes it is worth it to lie, e.g. as a spy bringing down a bad regime.

Options: a bunch of things you are fully within your rights to do even though they do not have the best consequences. Live your own life without harming other people, without violating constraints. Relax utilitarianism a bit to say you don't have to do the very best thing.

---

I think there was a point in my life when I was a diehard utilitarian, and I thought this is the way that things should be done. Over time I have backed off of that a little bit, and I now make a more circumscribed kind of claim, which is roughly: **I can articulate some conditions under which I think a type of utilitarian reasoning is roughly right for a certain purpose**.
If I was going to put a quick gloss on it, it would be: actions that are conventionally regarded as acceptable, and that you are happy to do. There are some times in your life where basically what you are trying to do is to help people [sentient beings] and to help them impartially […] and you're trying to do good, you're trying to do it impartially and [...] there is no temptation to do anything sketchy with that; you're acting fully within your rights according to any common-sense conception of how things are, and you're happy to do it. Let's say it's a sacrifice you are happy to make, say it involves giving some money or spending some time. And I think [in circumstances such as these], as a first cut, utilitarianism can be your go-to answer. And this is distinct from saying that utilitarianism is the master theory of value that works for all situations no matter what. […]

And I would include in those stipulations that you are not violating what people conventionally conceive of as rights. And that's gonna get a little bit squishy. If you say "so is it convention that matters?", I'm going to say no, it's not convention exactly that matters; and then you start saying "well, what is it?", and I'm gonna have a little bit of a hard time pinning that down. But I would say that convention is a good first cut, and I want to make a further claim: you really can do a lot with this. You know, if somebody's mission in life is to do as much good as possible, I think most of the good ways of doing that don't require a lot of lying or breaking promises or violently coercing people to do things.

Beckstead also sees utilitarianism as a helpful framework, but a partial story.

Special obligations: promises, family.

Is there a name for this view? No. Let's respect these ideas in practice, but I'm still really into the idea of doing as much good as possible with part of my life.
NB: You know, with consequentialism, whether you're doing act or rule or something else, it feels a bit academic to me, in the sense of remote from life or something. I find it really kind of telling that it doesn't, like, come up. You have this community of effective altruists who kind of nerd out like crazy on this stuff, and, you know, every kind of conceivable thing that could matter, they talk about within this vast family. And yet it's never really come up. As in, "the reason I'm donating to charity A rather than charity B this year is because I'm a rule consequentialist." [...] Maybe you do get a little bit of that. Like people who donate to Wikipedia who think "I ought to do it because I benefit from it, even though I don't think it's the optimal charity". And you know, I think that's perfectly reasonable, and I think maybe people should do it a little bit more. But I just feel like it's not where the action is. I think where the action is, is the population ethics piece and the aggregation. Nailing that down feels like the thing where, if you're really deploying this, it's the stuff you want to get right.

Spencer: Yes, it matters a lot what you choose.

---

Beckstead is an anti-realist. Maybe some kind of expressivist. We are starting with the cares we have. Aliens would probably end up in a different place.
Spencer: What would you say to someone who is an anti-realist about morality but thinks utilitarianism is kind of the right theory of ethics?

Beckstead: What do you mean by "right"? Do you mean "it's the theory you propose, and would like us all to follow"? Or do you mean "it has arguments such that a lot of other humans, if they understood them properly, would come to endorse it"? When we're talking about meta-ethics, I'm not sure that all the questions are super well posed. I like to bring all of this back to "well, is there a prediction we disagree about?" or "do we have a proposal we disagree on?".

#todo Von Neumann-Morgenstern axioms

These theorems prove that you should have preferences that can be represented by numbers, and that you should maximise the expected satisfaction of those preferences. They could be the opposite of utilitarian preferences.

Beckstead's preferred terminology:
- Welfare or wellbeing for the theory of wellbeing
- Utility for expected utility theory

Economists walk around assuming the preference theory of utility. Maybe they also kind of assume that people's utility is always self-regarding: they talk about preference satisfaction and sort of think you're talking about yourself here, and not necessarily other things. But I think a lot of people really care about the rest of the world more than they care about themselves. I think there are a lot of people who, if you asked "would you heroically sacrifice yourself to save your country or planet or family or platoon?", would say "well, yes I would".

Spencer: often see conflation between GDP and wellbeing.

The thing that comes closest to proving we should be utilitarians is Harsanyi's aggregation theorem. John Broome, _Weighing Goods_ and _Weighing Lives_. #todo

Favourite way to nail down the most useful theory of aggregation: _Weighing Goods_. You need a neutral level: a level where adding or removing a person doesn't change anything about the value of the world. Read Beckstead's PhD.
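A compressed statement of the von Neumann-Morgenstern result flagged above, in my own notation (a standard textbook form, not taken from the interview):

```latex
% If a preference relation \succsim over lotteries p, q satisfies
% completeness, transitivity, continuity and independence, then there
% exists a utility function u such that
p \succsim q \iff \mathbb{E}_p[u] \ge \mathbb{E}_q[u],
% and u is unique up to positive affine transformation
% (u' = a u + b with a > 0).
% Note: u here represents whatever preferences the agent has; nothing
% forces those preferences to be utilitarian, which is the point made
% in the notes above.
```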
#todo Intro to total utilitarianism and population ethics. What is the best intro? Look at the longtermism syllabus.

Total utilitarianism basically works in practice, in most contexts.

Method: a wellbeing function for each individual, then aggregate to get a utility function for the world. Issue: this only works with a fixed population. If you bought this, then you really should be just adding things up whenever what you're doing is not affecting population sizes. Obviously the theory doesn't tell you what the consequences are, but it gives you conceptual clarity about what you're trying to do. #todo
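A minimal formal sketch of the fixed-population method described above, and of the neutral level mentioned earlier; the symbols $U$, $W_i$, $P(w)$ and $c$ are my own notation, not from the interview:

```latex
% Fixed population of n individuals; W_i(w) is individual i's
% wellbeing in world w. Total utilitarianism ranks worlds by
U(w) = \sum_{i=1}^{n} W_i(w).
% For variable populations, one common patch is a critical level c:
% let P(w) be the set of people who exist in w, and rank worlds by
U_c(w) = \sum_{i \in P(w)} \bigl( W_i(w) - c \bigr).
% Adding a person whose wellbeing is exactly c leaves U_c unchanged,
% so c plays the role of the neutral level.
```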