## On "Deep Pragmatism" With a little perspective, we can use manual-mode thinking to reach agreements with our " heads" despite the irreconcilable differences in our " hearts." This is the essence of deep pragmatism: to seek common ground not where we think it ought to be, but where it actually is. ## Moral Tribes ### Selected highlights See also: [All my highlights](https://docs.google.com/document/d/14ei6RWv8ZyHsERPDkvOGU7djsasEbWjkHrkG5fnRzXE/edit#heading=h.pbhzenof5o9z) **Once we understand how quirky and contingent our moral intuitions are, we'll see the need for a rational meta morality.** **Hope is that by holding up a mirror - showing where our gut reactions come from - helps us decide whether they are useful or not, rather than just uncritically endorsing them.** **We've underestimated utilitarianism because we've overestimated our minds. We've mistakenly assumed that our gut reactions are reliable guides to moral truth.** **People think that doing whatever works best is splendid, until they realise that what they really want is not necessarily what works best.** Still, perhaps doing whatever works best really is a splendid idea. That's the thought behind utilitarianism, a thoroughly modern philosophy that is easily mistaken for simple common sense. **Utilitarianism is more than an injunction to be pragmatic. Utilitarianism is about core values. It's about taking pragmatism all the way down to the level of first principles.** it begins with a core commitment to doing whatever works best, whatever that turns out to be, and even if it goes against one's tribal instincts. I prefer to think of utilitarianism as deep pragmatism. According to consequentialism, ultimate goal should be to make things go as well as possible. But what do we mean by well? What makes some consequences better than others? **Utilitarianism gives a specific answer to this question. 
If we combine the idea that happiness is what matters with the idea that we should try to maximize good consequences, we get utilitarianism.**

**Values ultimately derive their value from their effects on our experience.** Happiness is what matters, and everyone's happiness counts the same. This doesn't mean that everyone gets to be equally happy, but it does mean that no one's happiness is inherently more valuable than anyone else's.

The higher pleasures are better at least sometimes not because they're better pleasures but because they're pleasures that serve us better in the long run.

Utilitarianism combines the Golden Rule's impartiality with the common currency of human experience. This yields a moral system that can acknowledge moral trade-offs and adjudicate among them, and it can do so in a way that makes sense to members of all tribes.

Bentham and Mill put aside the inflexible automatic settings and instead asked two very abstract questions. First: what really matters? Second: what is the essence of morality? They concluded that **experience is what ultimately matters,** and that **impartiality is the essence of morality.**

Seeking a moral system that works - one that we generally find satisfactory - is different from seeking the moral truth, and does not presuppose it. For our purposes, shared values need not be perfectly universal. They just need to be shared widely - shared by members of different tribes whose disagreements we might hope to resolve by appeal to a common moral standard. [...] What matters is that this "we" is very large and very diverse, including members of all tribes.

**Understanding utilitarianism requires the dual-process framework.** **Utilitarianism is the naive philosophy of the human manual mode, and all of the objections to utilitarianism are ultimately driven by automatic settings. No one's automatic reactions are consistently utilitarian.
Thus, our dual-process brains make utilitarianism seem partially, but not completely, right to everyone.** We all get utilitarianism because we all have the same manual mode, and we're all offended by utilitarianism because we all have non-utilitarian automatic settings.

What exactly is problem solving? In the lingo of artificial intelligence, solving a behavioural problem is about realising a goal state. A problem solver begins with an ideal representation of how the world could be and then operates on the world so as to make the world that way.

**Utilitarianism can be summarised in three words: maximize happiness impartially.** The maximize part comes from the human manual mode, which is, by nature, a device for maximizing. This, I claim, is universal - standard issue in every healthy human brain. The manual mode doesn't come with a moral philosophy, but **it can create one if it's seeded with two universally accessible moral values: happiness and impartiality.** This combination yields a complete moral system that is accessible to members of all tribes. This gives us a pathway out of the morass, a way of transcending our incompatible visions of the moral truth. Utilitarianism may not be the moral truth, but it is, I think, the metamorality that we are looking for.

The worry that utilitarianism is too demanding is a devastating worry only if we expect ourselves to be perfect utilitarians, and trying to be a perfect utilitarian is, in fact, a very un-utilitarian thing to do. **Utilitarianism will forgive you for nurturing personal relationships and interests, but is this something for which you need to be forgiven?** [...] Perhaps if we step back far enough from our human values, we can see that they are not ideal, even as we continue to embrace them. Imagine that you're in charge of the universe and you've decided to create a new species of intelligent, sentient beings. [What values would they have?]
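The AI framing of problem solving quoted above - hold a representation of a goal state, then act on the world to realise it - can be made concrete with a toy sketch. This is my own illustration, not the book's; the state space, action names, and breadth-first strategy are all assumptions for demonstration:

```python
# Toy illustration of goal-state problem solving (not from the book):
# a problem solver holds a representation of the goal state and searches
# for a sequence of actions that brings the world to that state.
from collections import deque

def solve(start, goal, actions):
    """Breadth-first search for a plan of action names reaching the goal state."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan  # the plan that makes the world match the goal representation
        for name, act in actions.items():
            nxt = act(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None  # no plan found

# Hypothetical example: move a counter from 0 to 3 with increment/decrement actions.
actions = {"inc": lambda s: s + 1, "dec": lambda s: s - 1}
print(solve(0, 3, actions))  # ['inc', 'inc', 'inc']
```

The "ideal representation of how the world could be" is just the `goal` argument; "operating on the world" is applying actions until the current state matches it.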
**I'd rather be a human who knows that he's a hypocrite, and tries to be less so, than one who mistakes his species-typical moral limitations for ideal values.**

**I am not claiming that utilitarianism is the moral truth. Nor do I claim that it perfectly captures and balances all of human values. My claim is simply that it provides a good common currency for solving real-world moral disagreements.** If the utility monsters and the rabbits ever arrive demanding their utility, we may have to amend our principles. Or maybe they would have a good point, albeit one that we would have a hard time appreciating.

Our laws have to say something. We have to choose, and unless we're content to flip coins, or allow that might makes right, we must choose for reasons. We must appeal to some moral standard or another.

When to shift from automatic to manual mode? When there is controversy.

**Simply forcing people to justify their opinions with explicit reasons does very little to make people more reasonable, and may do the opposite.** But forcing people to confront their ignorance of essential facts does make people more moderate. [Socrates] Instead of simply asking politicians and pundits why they favour the policies they favour, first ask them to explain how their favoured (and disfavoured) policies are supposed to work.

**Rationalisation is the great enemy of moral progress.** The moral rationaliser feels a certain way and then makes up a rational-sounding justification for that feeling. #toread Greene 2007, "The Secret Joke of Kant's Soul" - on rationalisation.

Thus, appeals to "rights" function as an intellectual free pass, a trump card that renders evidence irrelevant. Whatever you and your fellow tribespeople feel, you can always posit the existence of a right that corresponds to your feelings. If you feel that abortion is wrong, you can talk about a right to life. [...] "Rights" are nothing short of brilliant.
They allow us to rationalise our gut feelings without doing any additional work. Rights and their mirror images, duties, are the perfect rhetorical weapons for modern moral debate:

- "Not to be done" => rights
- "Must be done" => duties

We embattled moralists love the language of rights and duties because it presents our subjective feelings as perceptions of objective facts. By appealing to rights we excuse ourselves from the hard work of providing real, non-question-begging justifications for what we want. Claims about what will or won't promote the greater good are ultimately accountable to evidence.

Arguing about rights may be pointless, but sometimes arguing is pointless. Sometimes what you need is not arguments, but weapons. And that's when it's time to stand up for rights. As deep pragmatists, we can appeal to rights when moral matters have been settled. [...] Our appeals to rights may serve as shields, protecting our moral progress from the threats that remain.

If we are truly interested in persuading our opponents with reason, then we should eschew the language of rights. This is because we have no non-question-begging (and nonutilitarian) way of figuring out which rights really exist and which rights take precedence over others. But when it's not worth arguing - either because the question has been settled or because our opponents can't be reasoned with - then it's time to stop arguing and rally the troops. It's time to affirm our moral commitments, not with wonky estimates of probability, but with words that stir our souls.

At some point, it dawns on you: Morality is not what generations of philosophers and theologians have thought it to be. Morality is not a set of standing abstract truths that we can somehow access with our limited human minds. Moral psychology is not something that occasionally intrudes into the abstract realm of moral philosophy. Moral philosophy is a manifestation of moral psychology.
Moral philosophies are, once again, just the intellectual tips of much bigger and deeper psychological and biological icebergs. Once you've understood this, your whole view of morality changes. Figure and ground reverse, and you see competing moral philosophies not just as points in an abstract philosophical space but as the predictable products of our dual-process brains.

There are three major schools of thought in Western moral philosophy: utilitarianism/consequentialism (à la Bentham and Mill), deontology (à la Kant), and virtue ethics (à la Aristotle). These three schools of thought are, essentially, three different ways for a manual mode to make sense of the automatic settings with which it is housed:

- We can use manual-mode thinking to explicitly _describe_ our automatic settings (Aristotle).
- We can use manual-mode thinking to _justify_ our automatic settings (Kant).
- And we can use manual-mode thinking to _transcend_ the limitations of our automatic settings (Bentham and Mill).

With this in mind, a quick psychological tour of Western moral philosophy:

As an ethicist, Aristotle is essentially a tribal philosopher. Read Aristotle and you will learn what it means to be a wise and temperate ancient Macedonian-Athenian aristocratic man. And you will also learn things about how to be a better human, because some lessons for ancient Macedonian-Athenian aristocratic men apply more widely. But Aristotle will not help you figure out whether abortion is wrong, whether you should give more of your money to distant strangers, or whether developed nations should have single-payer healthcare systems. Aristotle's virtue-based philosophy, with its grandfatherly advice, simply isn't designed to answer these kinds of questions. One can't resolve tribal disagreements by appeal to virtues, because one tribe's virtues are another tribe's vices - if not in general, then at least when tribes disagree.
The great hope of the Enlightenment was that philosophers would construct a systematic, universal moral theory - a metamorality. But as we've seen, philosophers have failed to find a metamorality that _feels_ right. (Because our dual-process brains make this impossible.) Faced with this failure, one option is to keep trying. (See above.) Another option is to give up - not on finding a metamorality, but on finding a metamorality that feels right. (My suggestion.) And a third option is to give up entirely on the Enlightenment project: to say that morality is complicated, that it can't be codified in any explicit set of principles, and that the best one can do is hone one's moral sensibilities through practice, modeling oneself on others who seem to be doing a good job.

## Sean Carroll interview

Should EA adopt and push Josh Greene's "deep pragmatism" rebrand of utilitarianism? PH thoughts: aim for impartiality = maximum cooperativeness.

Me vs us, us vs them. Morality = natural phenomenon; it emerges from evolution and the need to cooperate. Start with a practical problem:

- Morality = help individuals cooperate.
- Metamorality = help groups cooperate.

Modern ethics is mostly focused on: what should our metamorality be? A common currency is required. The best candidate is experience. Why, why, why? It comes down to the quality of someone's experience. Combine that with impartiality / equal treatment of interests.

Sometimes we value actions directly, and we can't say why. Sometimes we value them via theory. Habit vs reflection - in valuation, but also in thought.

Book in a nutshell: in everyday life, follow your intuitions and don't overthink it. When it comes to issues that divide groups, we need to step back from our intuitions.