## Gentle other
Second species argument.
Assumes interspecies conflict. These new creatures are in competition with you.
Dawkins, as often, is eloquent:

> The total amount of suffering per year in the natural world is beyond all decent contemplation. During the minute it takes me to compose this sentence, thousands of animals are being eaten alive; others are running for their lives, whimpering with fear; others are being slowly devoured from within by rasping parasites; thousands of all kinds are dying of starvation, thirst and disease.
## The despair of normative realism bot
**Cognitivism:** normative judgments are candidates for truth or falsehood.
**Judgment non-naturalism:** In order for these judgments to be true, there need to be normative facts that are, in some sense, irreducibly “over and above” facts about the natural world.
**Metaphysical non-naturalism:** There are in fact such non-natural facts.
p.2 The connotation is meant to be one of empiricism, concreteness, scientific respectability, “is” instead of “ought,” “facts” instead of “values,” Richard Dawkins instead of C.S. Lewis, etc — though, perhaps instructively, none of these distinctions are particularly straightforward.
p.3 I’ve always found this type of move somewhat obscure, and my suspicion is that the relevant needle cannot be stably threaded — that one should end, ultimately, with naturalism, or something more than naturalism, but not with, as it were, both: naturalism about ontology, but non-naturalism about… something else.
p.3 Enoch’s view openly violates what he calls an “and-that’s-it” clause about science
p.3 Our pro tanto reasons to prevent pain are a robustly additional feature of the world; alien scientists whose theories didn’t acknowledge these reasons would be missing some fundamental aspect of objective reality, as real as the speed of light.
p.4 empirically — and, I think, instructively — many seem less natively inclined to think that epistemic reasons require inflationary metaphysics.)
p.4 something like a Parfitian meta-ethic is often taken for granted by many philosophers working primarily in normative ethics, in ways that filter into adjacent discourses (for example, the discourse in the Effective Altruism community, especially in the UK).
p.4 My current best guess is that Enochian (and also Parfitian) realism is false — though the debate feels slippery.
p.5 Many people, I think, are Enochians/Parfitians because they accept something like judgment non-naturalism — that is, they think non-naturalism is the only way to validate what needs validating about normative judgment and discourse.
p.5 Indeed, non-naturalist realists will sometimes use the threat of nihilism as a kind of positive argument for metaphysical non-naturalism.
p.6 Both arguments amount to something like: it’s my non-naturalist metaphysics, or the void.
I’ve often wondered how many metaphysical positions are motivated, subtly or not-so-subtly, by efforts to avoid some void or other; to protect our connection to something we love, some perception of the world that calls out to us in beauty and meaning and importance, from some “it’s just X” associated with naturalism — just atoms, just evolution, just neurons, just computation — where the “just” in question is experienced as somehow draining the world of color, a force for numbness, disorientation, blankness, estrangement.
The culture of analytic ethics is not particularly friendly to expressions of emotions like fear of the void, and many philosophers do not appear disposed to them; but I wonder where they might be in play, maybe in subtle ways, all the same.
(My girlfriend notes that accepting claims implied by the void can also come with a threat of social punishment — perhaps a more potent and psychologically effective threat than the void itself).
## On the limits of idealised values
idealizing subjectivism is, and needs to be, less like “realism-lite,” and more like existentialism, than is sometimes acknowledged. If subjectivists wish to forge, from the tangled facts of actual (and hypothetical) selfhood, an ideal, then they will need, I expect, to make many choices that create, rather than conform. And such choices will be required, I expect, not just as a “last step,” once all the “information” is in place, but rather, even in theory, all along the way. Such choice, indeed, is the very substance of the thing.
## Problems of Evil
https://handsandcities.com/2021/04/19/problems-of-evil/
the problem of evil is about more than metaphysics. Indeed, Lewis dismisses materialism as confidently as ever; Hart sets the question of God’s “existence,” whatever that means, swiftly to the side; Ivan still expects the end of days. The problem of evil shakes them on a different axis — and plausibly, a more important one. It shakes, I think, their _love_ of God, whatever He is. And love, perhaps, is the main thing.
[...]
I think of whether someone believes that Ultimate Reality is in some sense “good” as a much more informative question, spiritually speaking, than whether they believe in God, or that e.g., our Universe was created by something like a person.
[...]
Thus, to see a man suffering in the hospital is one thing; to see, in this suffering, the sickness of our society and our history as a whole, another; and to see in it the poison of being itself, the rot of consciousness, the horrific helplessness of any contingent thing, another yet.
We might call this last one “existential negative”; and we might call Ginsberg’s attitude, above, “existential positive.” Ginsberg looks at skin, nose, cock, and sees not just particular “holy” things, contrasted with “profane” things (part of the point, indeed, is that cocks read as profane), but holiness itself — something everywhere at once, infusing saint and sinner alike, shit and sand and saxophone, skyscrapers and insane asylums, pavement and railroads, the sea, the eyeball, the river of tears.
[...]
“Father love,” for many, is easy to understand. Love, one might think, is an evaluative attitude that one directs towards things with certain properties (namely, lovable ones) and not others. Thus, to warrant love, the child needs to be a particular way. So too with the Real, for the secularist. If the Real, or some part of it, is pretty and nice, great: the secularist will affirm it. But if the Real is something else, the thing to be done is to _reshape_ it until it’s _better_. In this sense, the Real is approached centrally as raw material (here I think of Rob Wiblin’s recent [tweet](https://twitter.com/robertwiblin/status/1382331163050663937): “I’m a spiritual person in that I want to convert all the stars into machines that produce the greatest possible amount of moral value”).
But mother love seems, on its face, more mysterious. What sort of evaluative attitude is unconditional in this way? Indeed, more broadly, relationships of “unconditional love” raise some of the same issues that Ginsberg’s holiness does: that is, they risk negating the sense in which meaningfully positive evaluative attitude should be _responsive_ to the properties of their object (reflecting, for example, when those properties are bad). And one wonders (as the devil wondered about Job) whether the attitude in question is really so unconditional after all.
But is mother love unconditionally positive? Maybe in a sense. But a better word might be: “unconditionally committed” or “unconditionally loyal”. [...] Where the archetypal father might, let us suppose, _give_ _up_ on the child, if some standard is not met, the mother will not. That is, the mother is always, in some sense, loyal to the child; on the child’s team; always, in some sense, caring; paying attention.
[...]
Chesterton, in [Orthodoxy](https://www.amazon.com/Orthodoxy-G-K-Chesterton/dp/0898705525) (chapter 5) talks about loyalty as well, and about loving things _before_ they are lovable:
“My acceptance of the universe is not optimism, it is more like patriotism. It is a matter of primary loyalty. The world is not a lodging-house at Brighton, which we are to leave behind because it is miserable. It is the fortress of our family, with the flag flying on the turret, and the more miserable it is the less we should leave it. The point is not that this world is too sad to love or too glad not to love; the point is that when you do love a thing, its gladness is a reason for loving it, and its sadness a reason for loving it more … What we need is not the cold acceptance of the world as a compromise, but some way in which we can heartily hate and heartily love it. We do not want joy and anger to neutralize each other and produce a surly contentment; we want a fiercer delight and a fiercer discontent.”
## Gus Docker interview
Sometimes the "we have to integrate this with our lives" part of longtermism gets lost.
Astronomical-waste-type calculations do rightly prompt resistance and suspicion, and people saying: ah, this looks like the type of idea that would be totalising and kind of extreme and kind of inhuman.
Yeah, I agree. And I think those sorts of calculations do, I think, rightly prompt resistance and suspicion, and people saying, "Ah, this looks like the type of idea that is kind of totalizing and extreme and it feels inhuman." There are a lot of, I think, worthy forms of suspicion that that type of argument gives rise to. And I encourage people encountering the idea of longtermism to sit with it. I mean, I don't think we should dismiss the big numbers; it really matters. It's an important piece that the numbers are so big. And I think it's an important lesson of our experience with modern cosmology that sometimes the numbers are just really big. The universe is just in fact really big, and for closely related reasons, our future might be very big.
## The importance of how you weigh it
https://handsandcities.com/2021/03/28/the-importance-of-how-you-weigh-it/
Moral philosophers spend most of their time trying to identify what factors matter to at least some degree, and trying to explain why.
Surprisingly little time is spent writing on how we should weigh different factors.
In practice, the weighting is the crucial thing. And when you bear that in mind, the differences between consequentialist and non-consequentialist theories become less significant. All plausible non-consequentialist theories care about consequences to a significant degree. So they still have a weighing problem, perhaps a harder one than the consequentialists', since it has more variables.
Is there a field here? Well... yeah...?
# On future people, looking back at 21st century longtermism
URL: https://handsandcities.com/2021/03/22/on-future-people-looking-back-at-21st-century-longtermism/
----
I imagine our descendants saying: “Yes. You can see it. Don’t look away. Don’t forget. Don’t mess up. The pieces are all there. Go slow. Be careful. It’s really possible.”
----
it feels like Whitman is living, and writing, with future people — including, in some sense, myself — very directly in mind. He’s saying to his readers: I was alive. You too are alive. We are alive together, with mere time as the distance. I am speaking to you. You are listening to me. I am looking at you. You are looking at me.
----
> I am with you, you men and women of a generation, or ever so many generations hence,
> Just as you feel when you look on the river and sky, so I felt,
> Just as any of you is one of a living crowd, I was one of a crowd,
> Just as you are refresh’d by the gladness of the river and the bright flow, I was refresh’d,
> Just as you stand and lean on the rail, yet hurry with the swift current, I stood yet was hurried…
----
That tiny set of some ten billion humans held the whole thing in their hands. And they barely noticed.
----
Sometimes I imagine this as akin to playing backwards the time-lapse growth of an enormous tree, twisting and branching through time and space on cosmic scales — a tree whose leaves fill the firmament with something lush and vast and shining; a tree billions of years old, yet strong and intensely alive; a tree which grew, entirely, from one tiny, fragile seed.
----
epistemic learned helplessness for an example of where heuristics like this might come from. Basically, the idea is: “arguments, they can convince you of any old thing, just don’t go in for them roughly ever.”
----
because the numbers are so alien and overwhelming that one suspects that any quantitative (and indeed, qualitative) ethical reasoning that takes them as inputs will end up distorted, or totalizing, or inhuman. I think hesitations of this kind are very reasonable.
----
The question is whether future people, much wiser than ourselves, would be able to do something profoundly good on cosmic scales, if given the chance. I think they would. Extrapolating from the best that our current world has to offer provides the merest glimpse of what’s ultimately possible. For me, though, it’s more than enough.
----
breezy talk about what future people might do, especially amongst utilitarian-types, often invokes (whether intentionally or no) a vision of a future that is somehow uniform, cold, metallic, voracious, regimented — a vision, for all its posited “goodness” and “optimality” and “efficiency,” that many feel intuitively repelled by (cf. the idea of “tiling” the universe with something, or of something-tronium — computronium, hedonium, etc).
----
also think that talking about the value of the future in terms of such lives should just be seen as a gesture — an attempt to point, using notions of value we’re at least somewhat familiar with, at the possibility of something profoundly good occurring on cosmic scales, but which we are currently in an extremely poor position to understand or anticipate (see the section on “sublime Utopias” here).
----
## Expected utility maximisation series
Read Joe Carlsmith on expected utility maximisation (parts 1 and 2).
Scenario 1:
- A: 100% chance of saving 1 life.
- B: 1% chance of saving 1,000 lives.

Scenario 2:
- C: 100% chance of saving 1 life.
- D: a coin flip: heads you save 5 lives, tails you save 0.
Scenario 1 generates a "B seems better in theory but if I actually choose it, I'll predictably lose" concern.
But if you want to take D in scenario 2, it's hard to find a reason to take A in scenario 1.
It looks like picking A and D is extremely inconsistent—in theory.
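Spelling out the arithmetic behind this inconsistency (a minimal sketch; the payoffs are the ones in the scenarios above):

```python
def expected_lives_saved(lottery):
    """Expected value of a lottery given as (probability, lives_saved) pairs."""
    return sum(p * lives for p, lives in lottery)

# Scenario 1
ev_a = expected_lives_saved([(1.0, 1)])            # certainty: 1 life
ev_b = expected_lives_saved([(0.01, 1000)])        # 1% shot at 1,000 lives

# Scenario 2
ev_c = expected_lives_saved([(1.0, 1)])            # certainty: 1 life
ev_d = expected_lives_saved([(0.5, 5), (0.5, 0)])  # fair coin: 5 lives or 0

# EV(B) = 10 and EV(D) = 2.5 both beat the sure thing, so someone who takes
# D in scenario 2 has even stronger expected-value grounds to take B in
# scenario 1 -- hence picking A and D looks inconsistent.
```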
> There's nothing special about small probabilities—they're just bigger conditional probabilities in disguise.
Seems like maybe the intuition against D is driven by scope insensitivity, i.e. we're forgetting how much better the positive outcome is.
> If, in the face of a predictable loss, it’s hard to remember that e.g. you value saving a thousand lives a thousand times more than saving one, then you can remember, via coin-flips, that you value saving two twice as much as saving one, saving four twice as much as saving two, and so on.
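The quote's point can be made concrete: any power-of-two long shot is just a chain of fair coin flips, so endorsing each even-odds doubling step individually commits you to the small probability as a whole. A minimal sketch (the ten-flip example is my own illustration, not from the post):

```python
def chained_probability(n_flips):
    """Probability of n_flips consecutive heads with a fair coin,
    built up one even-odds step at a time."""
    p = 1.0
    for _ in range(n_flips):
        p *= 0.5  # each flip is a 50/50 conditional step
    return p

# Ten heads in a row is a 1-in-1024 chance, reached purely via 50/50 steps.
# If at every step you'd trade a sure thing for double the lives on a coin
# flip, you have thereby endorsed the 1/1024 gamble at 1,024x the stakes.
assert chained_probability(10) == 1 / 1024
```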
Can represent probabilities as grid lines on a square, and utilities in a third dimension, so you get a cityscape: probabilities set each building's width and length, utilities its height.
Intuition pump for scenario 1: 1000 people drowning. If you're one of the people drowning, you can have either a 0.1% chance of being the person I save, or a 1% chance of being saved. You want the latter! If I pick the former, it seems like maybe I'm not optimising for helping, but for getting credit for helping. The "safe bet" from my perspective seems worse from the perspective of the drowning group.
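The per-person arithmetic behind this pump (a sketch, using the numbers from the example above):

```python
n = 1000  # people drowning

# "Safe" option: save exactly one of the n, chosen at random.
p_saved_safe = 1 / n   # each person: 0.1% chance of being the one saved

# "Risky" option: 1% chance of saving everyone.
p_saved_risky = 0.01   # each person: 1% chance of being saved

# Each drowning person's survival odds are ten times better under the
# risky option, which is why the "safe bet" looks worse from their side.
```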
Joe doesn't have a theoretical solution for Pascalian and fanaticism stuff, but maybe bounded utilities are the answer.
Joe's relationship to EUM:
> There's a vibe [...] that's fairly core to my own relationship with EUM: namely, something about understanding your choices as always “taking a stance,” such that having values and beliefs is not some sort of optional thing you can do sometimes, when the world makes it convenient, but rather a thing that you are always doing, with every movement of your mind and body. And with this vibe in mind, I think, it’s easier to get past a conception of EUM as some sort of “tool” you can use to make decisions, when you’re lucky enough to have a probability assignment and a utility function lying around — but which loses relevance otherwise. EUM is not about “probabilities and utilities first, decisions second”; nor, even, need it be about “decisions first, probabilities and utilities second,” as the “but it’s not action-guiding!” objectors sometimes assume. Rather, it’s about a certain kind of harmony in your overall pattern of decisions — one that can be achieved by getting your probabilities and utilities together first, and then figuring out your decisions, but which can also be achieved by making sure your decision-making satisfies certain attractive conditions, and letting the probabilities and utilities flow from there. And in this latter mode, faced with a choice between e.g. X with certainty, vs. Y if heads (and nothing otherwise), one need not look for some independently specifiable unit of value to tally up and check whether Y has at least twice as much of it as X. Rather, to choose Y-if-heads, here, just is to decide that Y, to you, is at least twice as valuable as X.
> I emphasize this partly because if – as I did — you turn towards the theorems I’ll discuss hoping to answer questions like “would blah resources be better devoted to existential risk reduction or anti-malarial bednets?”, it’s important to be clear about what sort of answers to expect. There is, in fact, greater clarity to be had, here. But it won’t live your life for you (and certainly, it won’t tell you to accept some particular ethic – e.g., utilitarianism). Ultimately, you need to look directly at the stakes – at the malaria, at the size and value of the future – and at the rest of the situation, however shrouded in uncertainty. Are the stakes high enough? Is success plausible enough? In some brute and basic sense, you just have to decide.
https://handsandcities.com/2022/03/16/on-expected-utility-part-1-skyscrapers-and-madmen/
https://handsandcities.com/2022/03/18/on-expected-utility-part-2-why-it-can-be-ok-to-predictably-lose/