You wrote:

> EA seems to be the perfect ethical philosophy for our times. When it comes to “doing good,” it quantifies and abstracts. Things like family, friendships or personal connections are to count, literally — because EA seems to be all about counting — for nothing. Nothing is sacred — and please note that I’m using that term very carefully, deliberately, and non-romantically. Nothing is sacred, in the sense that everything is fungible and commensurable. (What does that remind one of?) EA is supposedly about more “fairness” and less “suffering,” but it’s careful to not ground any of that in what could be called personal. We’re all individuals, and there is, apparently, no such thing as society. It’s all globalism, no localism. What there is, is “humanity,” an appropriately abstracted entity that EAs seem to be as happy to bandy about and wield as Stalinists were back in the days. If this is ethical, then it’s an industrialized ethics. Scaled up, and essentially inhuman. Self-transcendence my ass. Truly the ethics that we deserve.

Some thoughts in reply...

> When it comes to “doing good,” it quantifies and abstracts.

A good place to start is: what motivates the abstraction? I think the answer is empathy for the concrete particular, combined with a worry that our moral intuitions are unreliable. Potentially severe sources of error include scope insensitivity, status quo bias and insidious forms of prejudice. Cf. [[David Runciman]] on Bentham; [Scope Insensitivity](https://www.lesswrong.com/posts/2ftJ38y9SRBCBsCzy/scope-insensitivity); [Shut Up and Multiply](https://www.lesswrong.com/tag/shut-up-and-multiply); [The Path to Reason](https://putanumonit.com/2020/10/08/path-to-reason/); [Against Empathy](https://www.youtube.com/watch?v=uF3EsdhasN0).

Introspectively, I notice that the news of 100,000 deaths does not feel 100x worse than the news of 1,000. But one doesn't have to be an arch-rationalist to recognise that the former is, in fact, about 100x worse. It seems our intuitive-emotional range for "strength of care" is very narrow compared to the potential consequences of some decisions that we now face. This is one major reason that abstraction and quantification are important, in some circumstances.

> Things like family, friendships or personal connections are to count, literally — because EA seems to be all about counting — for nothing.

It's true that for hedonists, what ultimately matters is pleasure and suffering. On that picture, the values you just mentioned are "merely" instrumental. Although hedonism is taken seriously by some people within the EA community, my impression is that most people see it as "one useful perspective", while some might go so far as to call it "our least bad theory of value". At the level of beliefs, I think most EAs take [moral uncertainty](https://forum.effectivealtruism.org/tag/moral-uncertainty) seriously enough that they remain very open to the possibility that there are other terminal values.

Whatever you think about hedonism vs pluralism, in practice you're still going to care a lot about family, friendships and so on, be it for instrumental or intrinsic reasons. A cartoon utilitarian might be expected to abandon their grandparents and betray their friends when an expected utility calculation tells them to, but actually sane utilitarians would almost never consider such things—a society full of humans thinking or acting this way seems... not viable, to put it mildly.
Peter Singer, famously, made the drowning child argument in [Famine, Affluence and Morality](https://en.m.wikipedia.org/wiki/Famine,_Affluence,_and_Morality), but also spent a lot of money on private nursing care for his mother, who had severe Alzheimer's.

I think the "*almost* never" here is a virtue: we *can* imagine extreme situations where a wise and virtuous person might decide to forsake a personal attachment for the "greater good". Self-sacrifice—putting the good of the community above your own—is sometimes justified.

Sane consequentialists emphasise the distinction between "decision procedures" (which help us decide how to act) and "criteria of rightness" (the things that ultimately make an action good or bad). Although consequentialist theories claim that the best action is the one with the best consequences, it does not follow at all that humans should think in consequentialist terms when considering how to act. Much of the time, their best bet is presumably to rely on standard rules, intuitions and shortcuts, including things like "focus on taking care of friends, family and local community". Consequentialism is, to that extent, a self-effacing theory.

Sometimes, however, people might do well to summon up a consequentialist perspective—for example when thinking about their careers. Learning when and how to do this is hard. (Compare *Thinking, Fast and Slow*, where System 1—"automatic mode"—works most of the time, but System 2—"manual mode"—is sometimes called for.)

Finally, I'll just flag that while all EAs hold the condition-of-sanity belief that consequences are important, it is certainly not the case that all EAs are "all in" on consequentialism (i.e. think that *only* consequences matter), and there are many for whom it is not even the view they place most credence on.

All this said, I do worry that consequentialists in general, and EAs in particular, aren't doing a good job of thinking and talking about the idea of "impartiality". There's a long-running debate about agent-neutral vs agent-relative reasons (reasons that anyone has vs reasons that only particular individuals have) and how these should be balanced. Sometimes people end up thinking that the only reasons that "really" matter are agent-neutral ones. I strongly disagree with this! (At least as a way for humans to think.) I hope to write more on this at some point. For now, I can suggest [[Susan Wolf]], [[Bernard Williams]], [[=Thomas Nagel]], or a book I haven't read yet but which looks good: _Beyond Selflessness_, by Christopher Janaway.

> Nothing is sacred — and please note that I’m using that term very carefully, deliberately, and non-romantically. Nothing is sacred, in the sense that everything is fungible and commensurable. (What does that remind one of?)

David Runciman says that John Rawls criticised utilitarianism along these lines. I do think there's something important here—it's one of the reasons I'm not a utilitarian. The Scheffler reading I assigned for the Bostrom salon touches on this, quoting Jerry Cohen. I'm not sure what's going on here, but I think it may have to do with the fact that while we actually can (kind of) imagine a "point of view of the universe", we are, in fact, thrown into particular positions in the space of possible worlds, with all sorts of existing attachments. Given that we are in such a position, the relevance of "the point of view of the universe" is somewhat limited—we are in fact much more constrained, in both good and bad respects.
Perhaps a more intuitive comparison is "the bureaucratic" vs "the personal" point of view: the former treats people as generic subjects of a system of rules, actively avoiding special treatment. Sometimes, clearly, this kind of abstraction is "worth it". E.g. money, supermarkets, Accident & Emergency.

> EA is supposedly about more “fairness” and less “suffering,” but it’s careful to not ground any of that in what could be called personal.

Yes. EA is mostly concerned with how to improve the world from an impartial perspective—starting with the "care about everyone equally" ideal, rather than "care only about people close to me". This is taken to be the essence of "altruism".

Personally, I think there's something a bit "off" here, both philosophically (cf. [[Bernard Williams]], [[=Joe Carlsmith]] on alienation) and practically.

Practically, I'm wary of an approach where the direction of travel is mostly from theory to practice. Theory is often illuminating and generative, and we can never do without it (our theories, tacit or explicit, shape our perceptions). But I worry about, and sometimes see, a failure mode where people who do a lot of abstract theory end up forgetting that they are humans. This leads them to go wrong in their beliefs, and to make unhelpful suggestions when it comes to action. It also leads to "dumb" failures of communication.

> If this is ethical, then it’s an industrialized ethics. Scaled up, and essentially inhuman.

Yes. Recall the "motivation for abstraction" points I made earlier. One reason that abstraction is necessary is that modern humans sometimes find themselves in decision scenarios very unlike those faced by our ancestors. In such cases, I think a degree of "inhuman" (in the sense of "not typically human") thinking should sometimes be welcomed.

Maybe you'd accept all this, and the pressing questions and cruxes are more about the details: "when" and "how" and "how much" should we activate these unusual styles of reasoning? I don't have good general answers to these questions—it seems like an art that is difficult to formalise. My hunch is that average citizens of rich countries use these modes too little, but at least some EAs use them too much. Cf. [Leopold Aschenbrenner on Burkean Longtermism](https://www.forourposterity.com/burkean-longtermism/); [Jacob Falkovich on The Path to Reason](https://putanumonit.com/2020/10/08/path-to-reason/).

What do you think?

---

Next: [[2021-08-20 What V thinks (A reply to P, replying to V)]]