**Update 2022-07**: my working note on [[Consequentialism]] is better than the below. It's on my list to re-draft or delete the below.
---
Consequentialism, roughly characterised, is the claim that the best action is whatever leads to the best consequences. [0]
This almost sounds like a platitude, but it’s actually an ambitious and revisionist theory which includes several controversial claims. I’ll highlight two:
1. *Only consequences matter*: for the moral evaluation of actions, everything depends on consequences. In theory, we can ignore factors that have no consequential effects, including intentions, past commitments, justice, and the character of the act itself.
2. *We must maximise*: ~~given a choice between two actions with positive consequences, we must always pick the one with the best consequences. It is never best to leave some positive consequences “on the table”, so to speak.~~ Edit 2022-07: this characterisation is misleading.
There’s been much discussion and refinement of consequentialist theories. The SEP article is a fine place to start [1]. I don’t yet have a great understanding of all the most plausible forms of consequentialist theory. As things stand, my favourite flavour might be something like “total pluralistic not-sure-about-maximising scalar consequentialism with sincere respect for a narrow class of nearly-absolute deontological side-constraints”. Some days I’m tempted to replace “pluralistic” with the narrower “welfarist”, where my account of welfare would probably involve some kind of agent-relative objective list [2]. Sometime I’ll write a note to go through this, but I’ll try to avoid the weeds for now...
Here I’ll lay out some pros and cons of consequentialism as a story about what makes actions right (rather than as a decision procedure that people should actually use). One must keep this distinction top of mind to give consequentialism a fair hearing: consequentialism does not entail that we should spend most of our time thinking in consequentialist terms, but rather that we should think in ways that generate the best consequences (which might well be in terms of following common sense, respecting rules, cultivating virtues, etc). Thinking about long-term consequences seems fiendishly difficult, and is probably not something anyone should try on a daily basis (c.f. cluelessness [3]).
# Some strengths of consequentialism
* Historically, moral thought had a tendency towards crude absolutism (hmm... is that true, or just a caricature? I should read the Malik book), attempts at moral innovation were sometimes unduly stigmatised, and resources for thinking about trade-offs were... limited. A focus on observable consequences seems to provide a secular common ground for moral thought, which makes it easier to debate, revise, trade off and compromise, with sensitivity to the complexity and contingency of things.
* Consequentialism seems to threaten some conceptions of unconditional commitment. I’ve come to see this as a feature, not a bug.
* A natural question might be: would it be right to break a promise in order to get the best consequences? We want to be able to say “sometimes yes, and sometimes no”, and on the face of it, consequentialism permits that.
* If you’re a pluralist, you might include kept promises in your definition of good consequences. If so, you can answer the question by comparing the disvalue of breaking the promise to the extra value achieved.
* Another option is to think of the obligation to keep promises as a constraint on the range of possible actions. For the absolutist, it is an absolute constraint, almost like a physical law; for the nearly-absolutist, it is more like a societal law (very strong presumption against breaking it; if you’re thinking about breaking it you’re probably wrong; if you do break it you should expect punishment unless you have a strong justificatory story).
* If we zoom out and consider the history of the universe, then of course, only what *actually happens* matters. We can’t change the past; we should focus on making the best of the future.
* It seems right to say that more of the good stuff and less of the bad stuff is better, all else equal.
* But I’m nervous about theories that say that there is only one right action / rule / culture, and all the others are wrong. Maximising consequentialisms say something like this at some level (“only the best will do”), but consequentialism actually lends itself to ranking acts / rules / cultures on a cardinal scale.
* There may be some level of abstraction at which we should endorse maximisation. This may be a place where my intuitions unhelpfully conflate the distinction between how we should think in practice versus how we should think in theory. Unsure.
* I find it hard to avoid sliding into pragmatism when I’m reflecting on moral theory. Maybe due to antirealist tendencies...
* The consequences of our actions have always been important. Their importance grows as our capabilities increase. Technological progress means we now have the ability to damage the environment or prevent diseases that would otherwise afflict millions. So it matters, more than before, what we decide to do, because the potential consequences are greater.
# Some issues with consequentialism
* [@TODO this is missing the key thing] Why think we should maximise, not satisfice, on consequences? [4]
* Consequentialism provides some structure—it tells us to look to the future, which we can affect—but it does not tell us what to count as good or bad. So we need to fill in the blank with a substantive theory of value.
* At this point, those with strong impartialist intuitions ask themselves: ok, let’s forget what humans randomly happen to value… what does reason say we should want? What would all possible minds want? What does the universe want? And they answer: “pleasure, not pain”, “preference satisfaction” or something like that.
* Strong impartialism may be maladaptive. If an alien civilisation threatened us, it would not be obvious to the impartialists that they should fight on the side of humanity. Instead they would ask: which civilisation will achieve the best consequences? (This isn’t a hypothetical: some Western consequentialists ask this question about Western vs Chinese values.)
* If the relevant consequences are particularly narrowly construed (e.g. it’s just pleasure and pain), this is very counterintuitive, and the burden of proof is on the consequentialist to justify that narrowing.
* Another option is to ask: what would count as good consequences, by our own lights, where “our” is defined more narrowly than “all possible minds”. A natural place to start for the moderate impartialist might be to think in terms of the lights of humanity, some idealised version of humanity as it currently stands, or perhaps an idealised version of one’s own culture. Williams calls this “the human prejudice”. Most of the time, I think that some version of “idealised contemporary culture” is the way to go.
* This commits us to a fundamental acceptance of and allegiance to who we are, what we just happen to be, and how we happen to think right now. Some people find this disturbingly arbitrary. I don’t. Or at least… I struggle to see a more solid / timeless / eternal option.
* Moral progress is possible on this picture; it’s just always by our own lights, which themselves evolve. On this picture, moral progress involves incremental change: any given change must command widespread assent, though after a large number of changes, the society at the beginning of the chain might not endorse the society at the end of the chain. On this picture, far-sighted moral reformers are required to have a degree of democratic patience, despite the apparently hideous costs of delay. If they want to speed things up, they need to learn how to change who we are.
[0] I’ll use “action” here for brevity, but you can replace with “rule”, “culture” or whatever is your favourite focus of evaluation.
[1] See here for a longer list of claims associated with classical utilitarianism: https://plato.stanford.edu/entries/consequentialism/#ClaUti
[2] https://plato.stanford.edu/entries/well-being/
[3] http://users.ox.ac.uk/~mert2255/papers/cluelessness.pdf; https://80000hours.org/podcast/episodes/hilary-greaves-global-priorities-institute/; https://www.flightfromperfection.com/cluelessness-what-to-do.html
[4] Writing this I realised there are some important papers on this. I’ve not read them yet. #todo https://philpapers.org/browse/maximizing-and-satisficing-consequentialism; https://www.princeton.edu/~ppettit/papers/1984/Satisficing%20Consequentialism.pdf; https://www.cambridge.org/core/journals/utilitas/article/against-satisficing-consequentialism/247AFF8D4B350823C5CE2CCF346F5CD8. See also https://plato.stanford.edu/entries/supererogation/