Inbox:
- [AK on Guive Assadi on AI](https://80000hours.slack.com/archives/C03AFCY0C/p1747076620041949)
- https://archive.ph/BxMfk NY Times profile
- Video on prediction markets https://www.youtube.com/watch?v=4yZKGbq1YmA
- https://www.reddit.com/r/slatestarcodex/comments/3sjtar/a_robin_hanson_primer/
- what has Hanson written on meta ethics?
- Age of em
- value drift
- Marginal charity.
- How Does Society Identify Experts and When Does It Work? http://vimeo.com/7336217
- https://www.elephantinthebrain.com/outline.html
- https://forum.effectivealtruism.org/posts/tK4XbwpnW43ENbLfu/robin-hanson-on-the-long-reflection -- theory vs practice again
Hanson: we're into diversity about things that don't matter. But we punish non-conformity very heavily on things we think matter. Cryonics vs ashes into space.
Diversity debates are religious wars over particular positions, not a general preference for diversity.
---
**Dark forest theory:** all alien civs are either hiding or already destroyed. Civs want to expand and resources are limited.
Distinct civs are going to have a very hard time establishing trust. So they will either try to destroy technologically inferior civs or hide from superior civs.
## Hanson & Roko Mijic
Speed of change, analogy to companies (human values or not); random selection of AIs vs selective breeding; speciesism; ems as solution
Hanson thinks companies just act in their interests and aren't much constrained by human values. I think that's wrong!
Roko pushes back: "that whole system which manages outcomes for us is heavily laden with human values because every single person is a human."
Over the last couple of centuries the optimisation power of companies has grown stronger. Things have become better on net. Why would this stop?
Robin Hanson - 01:07:29 Our world today, again, I'd say is dominated by large organizations who do not have human values.
Roko Mijic - 01:07:36 I would say that's a crux of disagreement. I think they do not have human values, but they are constrained by the system which overall has human values.
## Not Unreasonable interview 2
My usual first tool of analysis is competition and selection.
Powerful principle: to predict what rich creatures do you need to know what they want. To predict what poor creatures do, you just need to know what they need to do to survive.
---
[Looking back through history it is clear that] humanity has not been driving the train. There has been this train of progress or change and it has been a big fast train, especially lately, and it is making enormous changes all through the world but it is not what we would choose if we sat down and discussed it or voted. We just don't have a process for doing that. Whatever processes that changed things in the past, and did not occur through our approval or explicit analysis or consent, will continue. So I can use those processes to predict what will happen. I am assuming we will continue to have a world with many actions being taken for local reasons as they usually were. But that's a way to challenge my [Age of Em] hypothesis: you can say no, we will between now and then acquire an ability to foresee the consequences of such changes and to talk together and to vote together on do we want it, and we will have the ability to implement such choices and that will be a change in the future that will prevent the age of Em.
## Conversation with Tyler
The question is, what cues are we using? What are we looking at in the environment that we are using as cues to make this sort of behavior? I actually think evolution did a pretty decent job of noticing basic functional things in the environment. Is there a potential mate? Is this expensive? Is this cheap? Is this difficult to do?
We’re using those sorts of cues and mapping onto our evolved instincts in that way. I think art is roughly impressive things that are hard to do and that don’t seem to have much other function or purpose. Our evolutionary cues for that are pretty good.
...
**COWEN:** What offends you deep down? You see it out there. What offends you?
**HANSON:** It offends me when the things I try to do for high motives, other people pretend to do and get just as much credit as me.
[laughter]
**HANSON:** That’s relatively selfish and personal, but that’s more plausibly where my emotions would be. Yes. I see myself as trying to be an intellectual who looks at the difficult questions, the deep questions, grapples with them, focuses on coming up with hidden but powerful explanations, and then looks for reforms, institutions, mechanisms we could use to make things better.
That’s a noble cause in my mind. There are many people out there who other people are giving credit to them for doing that sort of thing, and I don’t think they deserve it because they’re not actually doing it.
That’s not my broad scope for all morality, but if you want to pick a thing that just makes me mad . . .
...
We like people, in case it’s not clear. Pretty much every other living creature on Earth is much less admirable and interesting than humans. Humans are where it’s at. Humans are the people you want to talk to, you want to interact with, you want to form relationships with. Humans are great.
Humans aren’t what they pretend to be. [laughs] But what they actually are is spectacular.
...
**I do think learning philosophy is useful mainly because it inoculates you against other philosophy, and there is a lot of philosophy loose in the world.** Unless you can find a world where you won’t be exposed to it later, you may find it in your interest to be exposed to it on purpose early so that you are inoculated.
## World Government Risks Collective Suicide
https://www.overcomingbias.com/2018/11/world-government-risks-collective-suicide.html
But, alas, central power risks central suicide, either done directly on purpose or as an indirect consequence of other broken thinking. In contrast, in a sufficiently decentralized world when one power commits suicide, its place and resources tend to be taken by other powers who have not committed suicide. Competition and selection is a robust long-term solution to suicide, in a way that centralized governance is not.
This is my tentative best guess for the largest [future filter](https://www.overcomingbias.com/2018/05/two-types-of-future-filters.html) that we face, and that other alien civilizations have faced. The temptation to form central governments and other governance mechanisms is strong, to solve immediate coordination problems, to help powerful interests gain advantages via the capture of such central powers, and to slake the ambition thirst of those who would lead such powers. Over long periods this will seem to have been a wise choice, until suicide ends it all and no one is left to say “I told you so.”
## Too little coordination vs too much coordination
https://www.overcomingbias.com/2018/05/two-types-of-future-filters.html
It isn’t at all obvious to me that the too little coordination disasters are more likely than the too much coordination disasters.
And so I conclude that I should be in-the-ballpark-of similarly worried about both categories of disaster scenarios. Future filters could result from either too little or too much coordination. To prevent future filters, I don’t know if it is better to have more or less world government.
## Minds Almost Meeting with [[=Agnes Callard]]
###### Robin:
> Right. So, in some sense, all explanations are disturbing
...
> I'm not sure how we can find explanations that are any more reassuring to people. Other than: you made it up yourself, and nobody's ever seen this before, and what a great innovation you pulled off.
###### Agnes:
> I mean, so that's interesting like that in and of itself, right, is kind of interesting, that, in effect, I think, it's that **people think of themselves as authorities on themselves**. So, it's relevant here that we're talking about explanations for people's behavior, and desire and dispositions and all of that, right.
###### Robin:
> Yeah.
###### Agnes:
> So, I think if you were like giving me an explanation for why, like the leaves are falling in that way, I don't feel similarly invaded, right?
...
it's not just that we would like to flatter ourselves by thinking of ourselves as creators of, authors of, in a gentle relation over our purposes. It's that we are very insistent that other people see themselves that way too. Because otherwise it's not obvious how we can hold one another morally responsible.
---
People in a society see that they have these purposes. And they say, "Well, that's just my purpose because I'm in the society, it seems obvious." And you don't realize. "No, your society had an evolutionary past. It went through a series of societies that had different norms at different times. And this heritage shaped your society's moral norms. And therefore, maybe you should accept that you think this is just morally wrong, but in fact, some larger calculation was saying this was the most cost-effective way to produce better behavior."
---
###### Agnes:
> In a way, what the traditionalists are doing is that they are trying to hold on to a distinctive culture, right?
###### Robin:
> Yup.
###### Agnes:
> And you might think that without them and without some of that distinctive traditional culture, there wouldn’t be anything that counted as victory. Like our culture winning – if our culture doesn’t have its own language, its own literature, its own cultural positions or whatever, then it doesn’t seem like there’s anything we could be winning in winning, right? And so, don’t these two bad guys and good guys kind of need each other in that part of what it is for like, you are kind of in the extreme of like kind of transhumanist, let’s let go of everything about us just like – **it seems like that is allowing for us in a way to be maximally similar to the other aliens and to be losing what might be distinctive about us, which is what would be necessary to have the relevant victory.**
###### Robin:
> So in our last podcast, we talked about related issue of cultural evolution so we talked about for example, if you were just going to push pink and purple as a cultural thing to spread, but that was going to come at the expense of the other cultural mechanisms and resources and what allowed you to push the pink and purple and those other cultural mechanisms would be suppressed in terms of evolutionary selection. That is evolution, a package of cultural units that together promotes the whole package. And so, merely holding on to any one element would be a losing strategy if it’s not packaged together with other elements that could help it win out in the long run. So that is exactly the question, so for example, if a species, I don’t know, has a feature and evolution would say, drop the feature and you’re better off, but you say, “feature is of my identity, I don’t want to drop the feature,” then you would be losing out. And so, this is the thing many firms face, for example, they have a sort of corporate culture and they have a set of products and they have to decide how flexible to be to adapt without losing everything that they are. But it is a basic trade-off. But it’s clear that going to either extreme seems to be worse than something in the middle. So …
###### Agnes:
> OK. But suppose humanity comes to you and they’re like, “Robin, we are convinced by this. We are going to innovate. We are going to expand. We are going to do everything you say to be – to try and become grabby aliens. But we also want to preserve our humanness to the extent of there being some victory there at the end of the day. What should we preserve about ourselves?”
###### Robin:
> So, I’ve given that some thought. And I’ve written some blog posts on it on the topic of legacies.
###### Agnes:
> OK.
###### Robin:
> So legacies are the sort of thing that you could hope to have last a long time. And so, we might look at the past and say, “Which are the things that have been able to last the longest in the face of evolutionary pressure?” So they tend to be things that sort of get frozen in a point and then last. So for example, locations of cities. Most cities in the modern era could be in many other places but once they are in one place then that place will stay because people want to be near the other people there. So many cities are, say at, where rivers come together which was once a good place for shipping but that reason no longer is relevant. But nevertheless, the cities stay in the same place. Similarly, we could think about, say, computer languages. In computer languages, you have some freedom of choices but then a particular language just gets frozen in because lots of people use it. Similarly, English may well be a legacy. It might be the world will just use English for a long time. There’s certain kind of computer languages that the world may just use for a long time because once everybody starts using something, it becomes this coordination point and it’s hard to switch. And so, we do know a lot about which sorts of things are likely to just get frozen in and stay as a feature and which things will switch. And I’ve given that a lot of thought even to the structure of our minds because I think we can look at the structure of our minds and see which things would last as legacies and be hard to change and which things would more likely be able to change and that we should be more flexible about that. And that is something we can anticipate by thinking about systems. And so, that gives us a sense of the things that if we hold on to those, it would not be so expensive, that would not really come at the expense of being able to be competitive or win out against other competitors, the things that are sort of naturally legacies.
###### Agnes:
> So – and you answered a slightly different question from the one I asked. I said like which things should we pick? But I think that was – which are the things about us. But what you said is, well, there are some things about us that are going to be less expensive to keep. And so, in effect, your eye is on like keeping something but as cheaply as possible so as to maximize our chances for …
###### Robin:
> Lasting.
###### Agnes:
> … for lasting, right. It’s like there’s a trade-off between our chances for lasting and the worth of lasting, right?
###### Robin:
> I agree.
###### Agnes:
> Is there any value to lasting? And you really – even with the trade-off, you want to lean on the side of – because it’s like if we – suppose we keep where the city – I mean who cares where the cities are, like that doesn’t seem that important if you told me like OK, we are going to keep Homer or Shakespeare, I’m like OK, maybe Homer/Shakespeare plays though, like those are things that are genuinely good and if that’s going to be the human brand, I’m kind of into that. But if you’re like, the human brand is the cities are located in these places and English, which is not even our best language by a long shot, then I’m like, “Well, maybe the aliens got something better going than the cities being located in those places and English.”
###### Robin:
> So for example, there are different ways that social species can coordinate and bond with each other. And so for example, we have a certain kind of love that maybe other social species don’t have. And maybe that kind of love could be a legacy.
###### Agnes:
> Maybe she is going to go for that. [Laughs] Let’s throw in the love bomb. [Laughs]
###### Robin:
> Well, I mean – but it’s an open question. That is, it might be that love is in fact easy to displace as a kind of bonding mechanism, in which case, it would be much more expensive to keep it. But in which case, I might say give up. But that was at least the sort of thing that might be high enough in your mind that would be worth trying to keep as opposed to the locations of cities.
###### Agnes:
> On the assumptions that the other aliens don’t have it, right?
###### Robin:
> For example, yes. Or maybe only 10% of alien species have it, and that 10% with love bond with each other.
## Long Legacies And Fights In An Uncaring Universe
https://www.overcomingbias.com/2018/10/long-legacies-and-fights-in-an-uncaring-universe.html
Most random actions fail badly at this goal. That is, most parameters are tied to some sort of physical, biological, or social equilibrium, where if you move a parameter away from its current setting, the world tends to push it back.
There is, however, one robust way to have a big influence on the distant future: [speed](http://reflectivedisequilibrium.blogspot.com/2018/10/flow-through-effects-of-innovation.html) [up](http://reflectivedisequilibrium.blogspot.com/2018/10/flow-through-effects-of-saving-life.html) or slow down innovation and growth. The extreme version of this is preventing or causing extinction; while quite hard to do, it has enormous impact. Setting that aside, as the world economy grows exponentially, any small change to its current level is magnified over time.
[...]
By speeding up growth, you can prevent the waste of all the negentropy that is and will continue to be destroyed until our descendants manage to wrest control of such processes.
Alas, making roughly the same future happen sooner versus later doesn’t engage most people emotionally; they are much more interested in joining a “fight” over what character the future will take at any given size.
## Long legacies and fights in a competitive universe
https://www.overcomingbias.com/2018/11/long-legacies-and-fights-in-a-competitive-universe.html
Other related evidence includes having the time when a firm builds a new HQ be a good time to sell its stock, futurists typically doing badly at predicting important events even a few decades into the future, and the “rags to riches to rags in three generations” pattern whereby individuals who find ways to grow wealth don’t pass such habits on to their grandchildren.
2) **Bad Human Reasoning** – While humans are impressive actors when they can use trial and error to hone behaviors, their ability to reason abstractly but reliably to construct useful long term plans is terrible. Because of agency failures, cognitive biases, incentives to show off, excess far views, overconfidence, or something else, alliances learned long ago not to trust to human long term plans, or to accumulations of resources that humans could steal. Alliances have traditionally invested in proselytizing, fertility, prestige, and war because those gains are harder for agents to mismanage or steal via theft and big bad plans.
## Morality Is Overrated
https://www.overcomingbias.com/2008/03/unwanted-morali.html
What we humans want is policy that considers our wants overall, without giving excess weight to morality. So we want policy advisors, like economists, who suggest actions that better get us what we want, even if those actions are immoral. We do not want to just do what we should, but we instead want to achieve all our ends, including immoral and amoral ends. So we mostly do not want to just do what moral philosophers suggest.
Unfortunately, all this is clouded by our tendency to want to appear to care more about morality than we actually do. We want to take the moral high ground and be seen as supporting highly moral policies, even if we don’t actually want those policies implemented. So we publicly support moral policies when our support seems unlikely to change the outcome. But it is amoral advisors, like economists, who help us the most.
**Bottom line:** We want to get what we want, not just do what we should, and so we want advisors like economists who tell us how to get what we want. But we’d rather be seen as following advisors like moral philosophers who tell us to do what we should.
## This is the dream time
https://www.overcomingbias.com/2009/09/this-is-the-dream-time.html
In the distant future, our descendants will probably have spread out across space, and redesigned their minds and bodies to explode Cambrian-style into a vast space of possible creatures. If they are free enough to choose where to go and what to become, our distant descendants will fragment into diverse local economies and cultures.
Given a similar freedom of fertility, most of our distant descendants will also live near a subsistence level. Per-capita wealth has only been rising lately because income has grown faster than population. But if income only doubled every century, in a million years that would be a factor of 10^3000, which seems impossible to achieve with only the 10^70 atoms of our galaxy available by then. Yes we have seen a remarkable demographic transition, wherein richer nations have fewer kids, but we already see contrarian subgroups like Hutterites, Hmongs, or Mormons that grow much faster. So unless strong central controls prevent it, over the long run such groups will easily grow faster than the economy, making per person income drop to near subsistence levels. Even so, they will be basically happy in such a world.
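A quick sanity check of the arithmetic in the passage above (my own sketch, not from the original post): income doubling every century for a million years means 10,000 doublings, and the resulting growth factor can be compared against the ~10^70 atoms Hanson says the galaxy offers.

```python
import math

# Doubling income every century for a million years = 10,000 doublings.
centuries = 1_000_000 / 100

# log10 of the total growth factor: 10,000 * log10(2) ~ 3010,
# matching Hanson's quoted "factor of 10^3000" order of magnitude.
exponent = centuries * math.log10(2)
print(f"growth factor ~ 10^{exponent:.0f}")  # growth factor ~ 10^3010

# Even at one atom per unit of wealth, ~10^70 atoms in the galaxy
# caps growth at 10^70, so this rate cannot be sustained.
print(exponent > 70)  # True
```

This supports the claim that sustained per-capita income growth is impossible over such timescales, which is why Hanson expects fertility to push most descendants back toward subsistence.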
[...]
When our distant descendants think about our era, however, differences will loom larger. Yes they will see that we were more like them in knowing more things, and in having less contact with a wild nature. But our brief period of very rapid growth and discovery and our globally integrated economy and culture will be quite foreign to them. Yet even these differences will pale relative to one huge difference: our lives are far more dominated by consequential delusions: wildly false beliefs and nonadaptive values that matter. While our descendants may explore delusion-dominated virtual realities, they will well understand that such things cannot be real, and don’t much influence history. In contrast, we live in the brief but important “dreamtime” when delusions drove history. Our descendants will remember our era as the one where the human capacity to sincerely believe crazy non-adaptive things, and act on those beliefs, was dialed to the max.
[...]
These factors combine to make our era the most consistently and consequentially deluded and unadaptive of any era ever. When they remember us, our distant descendants will shake their heads at the demographic transition, where we each took far less than full advantage of the reproductive opportunities our wealth offered. They will note how we instead spent our wealth to buy products we saw in ads that talked mostly about the sort of folks who buy them. They will lament our obsession with super-stimuli that hijacked our evolved heuristics to give us taste without nutrition. They will note we spent vast sums on things that didn’t actually help on the margin, such as on medicine that didn’t make us healthier, or education that didn’t make us more productive.
[...]
Perhaps most important, our descendants may remember how history hung by a precarious thread on a few crucial coordination choices that our highly integrated rapidly changing world did or might have allowed us to achieve, and the strange delusions that influenced such choices. These choices might have been about global warming, rampaging robots, nuclear weapons, bioterror, etc. Our delusions may have led us to do something quite wonderful, or quite horrible, that permanently changed the options available to our descendants. This would be the most lasting legacy of this, our explosively growing dream time, when what was once adaptive behavior with mostly harmless delusions became strange and dreamy unadaptive behavior, before adaptation again reasserted a clear-headed relation between behavior and reality. Our dreamtime will be a time of legend, a favorite setting for grand fiction, when low-delusion heroes and the strange rich clowns around them could most plausibly have changed the course of history. Perhaps most dramatic will be tragedies about dreamtime advocates who could foresee and were horrified by the coming slow stable adaptive eons, and tried passionately, but unsuccessfully, to prevent them.
{{Long run, evolution will have the final word}}
### Comments
Eliezer Yudkowsky • [12 years ago](https://www.overcomingbias.com/2009/09/this-is-the-dream-time.html#comment-518296978 "Tuesday, September 29, 2009 11:27 PM")
> Perhaps most dramatic will be tragedies about dreamtime advocates who could foresee and were horrified by the coming slow stable adaptive eons, and tried passionately, but unsuccessfully, to prevent them.
Yeah. I guess I don't ultimately understand the psychology that can write that and not fight fanatically to the last breath to prevent the dark vision from coming to pass.
How awful would things have to be before you would fight to stop it? Before you would do more than sigh in resignation? If no one were ever happy or sad, if no one ever again told a story or bothered to imagine that things could have been different, would that be awful enough?
Are the people who try and change the future, people who you are not comfortable affiliating yourself with? Is it not the "role" that you play in your vision of your life? Or is it really that the will to protect is so rare in a human being?
Robin Hanson [Eliezer Yudkowsky](https://www.overcomingbias.com/2009/09/this-is-the-dream-time.html#comment-518296978) • [12 years ago](https://www.overcomingbias.com/2009/09/this-is-the-dream-time.html#comment-518297048 "Tuesday, September 29, 2009 11:43 PM")
This vision really isn’t that dark for me. It may not be as bright as the unicorns and fairies that fill dream-time visions, but within the range of what seems actually feasible, I’d call it at least 90% of the way from immediate extinction to the very best possible.
Carl Shulman [Robin Hanson](https://www.overcomingbias.com/2009/09/this-is-the-dream-time.html#comment-518297048) • [12 years ago](https://www.overcomingbias.com/2009/09/this-is-the-dream-time.html#comment-518297344 "Wednesday, September 30, 2009 10:49 AM")
I see a worrying pattern here. Robin thinks the hyper-Malthusian scenario is amazingly great and that efforts to globally coordinate to prevent it (and the huge deadweight losses of burning the commons, as well as vast lost opportunities for existing beings) will very probably fail. Others, such as James Hughes and Eliezer and myself, see the Malthusian competitive scenario as disastrous and also think that humans or posthumans will invest extensive efforts (including the social control tech enabled by AI/brain emulations) to avoid the associated losses in favor of a cooperative/singleton scenario, with highish likelihood of success.
It almost seems as though we are modeling the motives of future beings with the option of working to produce global coordination simply by generalizing from our own valuations of the Malthusian scenario.
Jason Malloy • [12 years ago](https://www.overcomingbias.com/2009/09/this-is-the-dream-time.html#comment-518296788 "Tuesday, September 29, 2009 6:31 PM")
The idea here is essentially that the demographic transition fun time is probably close to an end, as natural selection reasserts itself and we all get back to breeding ourselves into poverty.
But once we get top-down control of human nature through genetic engineering, it's likely that we can retain whatever sociological equilibrium seems attractive at that time on into (relative) perpetuity.
With genetics as the fastest moving science, it seems like an unwise time to predict that natural selection is increasingly going to control humans, rather than that humans are increasingly going to control natural selection.
Anders Sandberg [Cyan](https://www.overcomingbias.com/2009/09/this-is-the-dream-time.html#comment-518726639) • [12 years ago](https://www.overcomingbias.com/2009/09/this-is-the-dream-time.html#comment-518726660 "Wednesday, September 30, 2009 3:11 AM")
Brian, the fact that our descendants will likely have utterly different values will still not impair our ability to make predictions about the low-level constraints on their behaviour. The observer of the nucleic acid ocean would be able to predict that in three billion years time there would still be competition for nucleotide raw material [*], they would just not be able to say what kinds of conglomerates would be doing the competition and how they would be going about it. But they would likely be able to predict with a high confidence that the era of replicators having a large soup of free nucleotides to use would be drawing to a close, never to re-occur.
Maybe our subsistence level descendants are like our poor nucleic acids, struggling to survive within the ~10^30 cells of the biosphere while pretty oblivious to how the amazing creatures built on top of them enjoy life.
[*] There would also be a certain finite probability of the competition ending with the destruction of all nucleic acids on the planet due to some external or internal disaster (life becoming intelligent and going postbiological might be lumped in here as a very weird and perhaps unlikely kind of internal disaster for the nucleic acids).
## Dreamtime social games
https://www.overcomingbias.com/2019/09/dreamtime-games.html
Paying more for results would feel to most people like having to invite less suave and lower class engineers or apartment sups to your swanky parties because they are useful as associates. Or having to switch from dating hip hunky Tinder dudes to reliable practical guys with steady jobs. In status terms, that all feels less like admiring prestige and more like submitting to domination, which is a forager no-no. Paying for results is the sort of thing that poor practical people have to do, not rich prestigious folks like you.
Of course our society is full of social situations where practical people get enough rewards to keep them doing practical things. So that the world actually works. People sometimes try to kill such things, but then they suffer badly and learn to stop. But most folks who express interest in social reforms seem to care more about projecting their grand hopes and ideals, relative to making stuff work better. Strong emotional support for efficiency-driven reform must come from those who have deeply felt the sting of inefficiency.
## How our era is unique
https://www.overcomingbias.com/2009/09/how-is-our-era-unique.html
Assumptions I share: our lineage probably won’t go extinct, we’ll keep growing, spread across space, redesign our minds and bodies, and eventually learn all tech, all within a mostly competitive framework.
## Callard & Hanson future generations
###### Robin:
> I do, but I’m not sure how strong it is. And that’s, like, a really deep, interesting question. So— And this is related to your stuff about altruism. So I know a lot of people in this “effective altruism” movement, and they are really tied emotionally to this concept of altruism. And their concept of altruism is a pretty broad, unspecific target of altruism, like what you were thinking. And, you know, something I said in a talk at an event once was basically that, Look, so far in history, the main way anybody has ever influenced the future is by having descendants. Overwhelmingly, the most influence on the future has gone through that channel. And that means you should consider if you want to have an influence on the future, thinking about using that channel.
> And the influence of having descendants is tied to the idea of, like, having an allegiance or an affiliation, right? So even think about nations in the world today, right? You might say, I want the world to do better. And then you could, like, be supporting the United Nations, or various multinational organizations. Or you could say, Well, I’m going to affiliate myself with my country. And my country, like, has a military institution, and I’ll help them, or it has a research arm, and I’m going to, like, make an alliance with other people in my country where we’re going to help our country help the future. And you might think, Well that’s not as altruistic. Right? You’re not trying to help everybody, you’re trying to help your thing.
> But I say— But evolution, cultural and genetic, is this process by which things help themselves. And you know, that’s the main way all influence has happened. And I worry that if you create these communities’ organizations that create this habit of just trying to help everybody, those things don’t survive evolutionary pressures, cultural or genetic. They would go away, right? That is, the habit of just helping the world indiscriminately might just not have heritage, might not have descendants in a way that helping your country, your community, even your ethnicity, your family, your profession could.
###### Agnes:
> Yeah, so as you were talking I was sort of thinking— I sort of had this flash of like, How would Aristotle see this idea of helping people who were not in your family? Right? And just kind of, sort of devoting your life and even sacrificing yourself and the goods of your life to helping them? I think he would call that slavery, because I think he— That’s what he thought a slave was, is somebody who the goal of their life is the happiness of another person. And I think what he would have said is, like, having slaves that are sort of the slaves of everyone is not a very effective way to have slavery. That is, a slave has to belong, like, to a particular community and to a particular person so that—
###### Robin:
> Will they take care of them, instruct them? You know, develop them?
###### Agnes:
> Who can specifically, who can give them instruction as to how to help them, right? And so what you might think is, like, the, you know, the effective altruists would like to be slaves of everybody. But that’s not a kind of coherent beneficence project. Because it’s actually hard to know what is good for someone else.
[…]
###### Robin:
> That’s right. But the thing— I mean, I tend to sort of come back to sort of long-term processes. And I do tend to think natural selection, or selection, will just be a continuing force for a long time. And the main alternative is governance. And so **I actually think one of the main choices that we will have, and the future will have, is the choice between allowing competition and then replacing with governance.** And both of them have downsides and risks.
> I mean, obviously, competition has this risk that the things we value will be competed away. So I even have, you know, a colleague, Nick Bostrom, who has an essay about imagining that consciousness would be evolved away, right? We’re not sure where it comes from or why it’s there. So competition might decide that it could do without it, right? And then we just have all of these, you know, they say, “Disneyland without the children.”
...
If we have big, say, problems like global warming, and we don’t have world governance to solve them, then we end up realizing those problems, and that’s expensive, right? Or, say, war: We don’t have a world government to stop war, then we keep having wars. Which is expensive and damaging, right?
On the other hand, if we do choose a world government, and then it entrenches itself, and then it becomes this big bloated parasite that, say, limits free expression, limits innovation, limits growth—then, like, it could prevent the growth and innovation that would have allowed us to meet aliens on their terms.
---
And I think, well, maybe here’s the thing: Humanity really is a pyramid scheme, in that we’re all in some sense predicating the value of our own lives, and what we’re doing, on something that actually can’t underwrite that value. So we’re sort of writing these empty checks. But if we’re far enough away from that event, we can deceive ourselves about that, and not make it apparent that that’s what we’re doing. And that’s why they prefer the longer civilization.
I think altruism is a pyramid scheme. And it’s just a big mistake to think that the altruistic life is, like, a good or even coherent life. It’s an interesting fact that if you look at, like, a philosopher like Aristotle, who early on in his Ethics, he considers a variety of lives. And he’s like, is this life a good life? Is this— He’s like, which is the best life? You know, so he considers a life devoted to bodily pleasure, he considers a life devoted to honor, he considers a life devoted to making money. The life devoted to virtue.
He dismisses all of those as not being the best life. He doesn’t even consider the life devoted to helping others. It doesn’t even show up for him. And I think the reason is, like, it’s obviously a pyramid scheme, right? That is, it’s obvious that, in some sense, the meaning of your life, right, would then be— in a sense, you’ve shifted the bump in the rug onto other people, and then if they’re also altruists… Right?
And I think maybe the most basic, like, I don’t know, premise, or dogma of, like, of ancient ethics, which is sometimes called “eudaimonism,” right—but it’s shared by, you know, Plato and Aristotle—is that, like, the good of your life, whatever it is, is something that has to come home to you. Like, it can’t be located in another person’s— Your happiness can’t be located in another person’s life.
## Luke Muehlhauser MIRI interview
https://intelligence.org/2013/11/01/robin-hanson/
LM: One hunch is that, for example, if someone is raised in a Popperian paradigm, as opposed to maybe somebody younger who was raised in a Bayesian paradigm, the Popperian will have a strong falsificationist mindset, and because you don’t get to falsify hypotheses about the future until the future comes, these kinds of people will be more skeptical of the idea that you can learn things about the future.
Or in the risk analysis community, there’s a tradition there that’s being trained in the idea that there is risk, which is something that you can attach a probability to, and then there’s uncertainty, which is something that you don’t know enough about to even attach a probability to. A lot of the things that are decades away would fall into that latter category. Whereas for me, as a Bayesian, uncertainty just collapses into risk. Because of this, maybe I’m more willing to try to think hard about the future.
RH: Say you’re running a business, and you have some competitors, and you’re trying to decide where your field will go in the next few years, or what kind of products people will like; or you’re running a social organization, and you’re trying to decide how to change your strategy.
Another example: you have some history, and you’re trying to go back and figure out what your grandfathers were doing. Or take almost any random question people might ask about the world: the Popperian stuff doesn’t help at all. It’s completely useless. If you had any habit of dealing with real problems in the world, you would have developed a tolerance for expecting things not to be provable or falsifiable.
**Luke**: Robin, you used this term “serious futurism,” which happens to be the term I’ve been using for futurists who are trying to figure it out as opposed to meet the demand for morality tales about the future, or meet a demand for hype that fuels excited talk about, “Gee whiz, cool stuff from the future,” etc.
When I try to do serious futurism, most of the sources I encounter are not trying to meet the demand of figuring out what’s true about the future. I have to weed through a lot of material that’s meeting other demands, before I find anything that’s useful to my project of serious futurism.
## Foresight interview
Laughter is a play signal. When someone gets hurt, we need to know whether we're still playing or now fighting; laughter signals that we're still playing.
Conversation: showing off backpack of interesting things to say. Improvisation. You're supposed to be able to say something interesting no matter what comes up. It's about showing off.
Superbowl ads cost more, not just in total but more per person. Even people who don't buy the product need to know what it means, so that you can use the product to communicate what kind of person you are. E.g. expensive car or watch.
Religious people do better than non-religious people on almost every measure. That's surprising if you think they're gullible enough to acquire false beliefs about God, religious documents, etc.
Politics: you have very little influence on world but you have big influence on how people around you see you.
Political loyalty is the fear that unless you show sufficient loyalty to people around you, they will punish you.
If anyone sees you violating a rule, they're supposed to do something about it; if they don't, that itself violates the rule. Rules are enforced by checking whether other people are following them. Many norms are framed in terms of motives: if I hit you deliberately that's bad, if I hit you accidentally that's okay. It's therefore very important to be able to push a good story about our motives.

In fact this was so important that our brains are the largest of any animal in proportion to body size because of our social world, and a big part of that social world is managing the possibility that you might be accused of a norm violation. You're constantly watching what you're doing and asking what good motive you could attribute to it: if someone challenged me, what could I say about why I was doing this? That's so important that the conscious part of your mind is the part in charge of it. The conscious mind is not the king, but the press secretary. You don't really know why you do things; that's not your job. Your job is to make up a good, coherent reason that makes sense of what you're doing and avoids the idea that you're violating norms. Many true motives risk violating norms even when they serve good ends: you're not supposed to brag, and you're not supposed to form subgroup coalitions.
Policy people consistently take our stated motivations at face value.
Policy proposals should let people keep pretending they're getting what they say they want, while actually giving them what they really want.
Value drift has been going on forever.
In the past, when value drift happened, change was so slow you didn't see much in your lifetime so you didn't worry about it too much. #todo
A prediction market is very accurate: it just blabs whatever it thinks and doesn't take political factors into account. A bit like a smart autistic person.
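The "blabs whatever it thinks" behavior is mechanical: in Hanson's logarithmic market scoring rule (LMSR), the standard automated market maker for prediction markets, the price is a pure function of the outstanding shares, with no room for political adjustment. A minimal sketch (the liquidity parameter `b=100` and the function names are illustrative choices, not from the source):

```python
import math

def lmsr_cost(quantities, b=100.0):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    """Instantaneous outcome prices: the softmax of q/b.
    They sum to 1 and read directly as probability estimates."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def trade_cost(quantities, outcome, shares, b=100.0):
    """What a trader pays to buy `shares` of `outcome`:
    the change in the cost function."""
    new_q = list(quantities)
    new_q[outcome] += shares
    return lmsr_cost(new_q, b) - lmsr_cost(quantities, b)

# A two-outcome market starts flat: price 0.5 on each side.
q = [0.0, 0.0]

# A trader who believes "yes" buys 50 shares; the price moves
# mechanically, with no regard for who is watching.
cost = trade_cost(q, 0, 50.0)
q[0] += 50.0
p_yes = lmsr_prices(q)[0]  # now above 0.5
```

Because the price depends only on the trade history, the market has no way to soften an unwelcome forecast, which is exactly the "smart but tactless" property Hanson describes.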
In general CEOs and kings and autocrats are much more constrained by their political environment than people realise. You can only stay at the top if you have a coalition backing you, your selectorate.
Question: Two models
1. Press sec and puppeteer behind scenes.
2. Press sec and no puppeteer, more random and fleeting, evolutionary.
RH: Not sure; it depends on whether the locus is in individual mental machinations vs. cultural inheritance.
Laughing together builds trust.
## Philosophy of Hypocrisy
https://www.overcomingbias.com/2011/05/philosophy-of-hypocrisy.html
The homo hypocritus hypothesis suggests that people will often find themselves having strong intuitions that it is moral for them to quietly evade the usual rules, while still advocating such rules for others. When could such intuitions offer strong support for the claim that such hypocrisy is in fact moral?
The issue here isn’t whether lies might _ever_ be moral, such as with the proverbial lie to save Jews from the Nazis. The issue here is examples such as that of Sidgwick’s socially-convenient lies on sex and religion, which gained him social support and prestige. What fraction of moral philosophers privately support that type of hypocrisy? How could we know?
## How to Torture a Reluctant Disagreer
https://www.overcomingbias.com/2007/08/how-to-torture.html
https://marginalrevolution.com/marginalrevolution/2007/07/assorted-link-2.html
[[=Tyler Cowen]]:
In some ways I think of the whole book as an (attempted) rebuttal to Robin. Robin is the rational constructivist, the logical atomist, the reductionist, and the extreme Darwinian. The Inner Economist is trying to reconcile (modified) economic reasoning and a (modified) version of common sense morality. …
Imagine an intellectual war with Darwin, Fourier, Comte, early Carnap, David Friedman and millenarian Christian eschatology on one side (that’s my mental image of how Robin maps into the history of ideas), with bits from Henry Sidgwick, Hayek, Quine, and William James on the other side, … I am (implicitly) defending gradualism, pluralism, the partial irreducibility of individual choice, the primacy of civilization, and yes also a certain degree of social artifice. …
Note that Robin is wrong to suggest I don’t reply to his views. I paint him as engaged in a subjective quest — including on bias — rather than standing from an Archimedean point. And within the realm of subjective quests, I try to outline a superior one, especially in the last few chapters of the book. He doesn’t like being relativized in this fashion, and that he doesn’t see me as replying to him is itself an indicator of our underlying differences.
#todo -- find these links
There followed alternating comments by Tyler, [me](http://www.typepad.com/t/comments?__mode=red&user_id=3576&id=77765828), Tyler, [me](http://www.typepad.com/t/comments?__mode=red&user_id=3576&id=77765828), Tyler, and [me](http://www.typepad.com/t/comments?__mode=red&user_id=3576&id=77768632), but in my view he never clarified our disagreement.
## Age of Em
### Introduction
Everyone without exception believes his own native customs, and the religion he was brought up in, to be the best. (Herodotus, 440 BC)
The future is not the realization of our hopes and dreams, a warning to mend our ways, an adventure to inspire us, nor a romance to touch our hearts. The future is just another place in space-time. Its residents, like us, find their world mundane and morally ambiguous. (Hanson 2008a)
...
Yes, you admit that lacking your wealth your ancestors couldn’t copy some of your habits. Even so, you tend to think that humanity has learned that your ways are better. That is, you believe in social and moral progress. The problem is, the future will probably hold new kinds of people. Your descendants’ habits and attitudes are likely to differ from yours by as much as yours differ from your ancestors. If you understood just how different your ancestors were, you’d realize that you should expect your descendants to seem quite strange.
...
New habits and attitudes result less than you think from moral progress, and more from people adapting to new situations.
Also, you likely won’t be able to easily categorize many future ways as either good or evil; they will instead just seem weird. After all, your world hardly fits the morality tales your distant ancestors told; to them you’d just seem weird. Complex realities frustrate simple summaries, and don’t fit simple morality tales.
## Wait to marry a cause
https://www.overcomingbias.com/2022/02/wait-to-marry-a-cause.html
Given all this, I am here to suggest that you wait longer to pick your causes, be they political, social, religious, justice, charity, etc. You really don’t know enough to choose well when you are young, and there isn’t that much that will go wrong if you wait to choose. Instead of spending money, time, and energy on causes when you are young, you can instead invest those in your family, career, etc., where they can offer big returns, giving you more to spend on your causes later on.
Yes, in worlds where most everyone gets married young, it was hard to wait. Even so, many were often advised to wait. Today, there are many social pressures to get young people to pick causes early. And yes, it can be hard to resist these pressures. Even so, I say: _wait_ and _date_. You just don’t know enough now, so your younger years are better spent learning and building. Later you will have more time, money, energy, insight, and social connections, all of which will help you to support whatever causes you choose.
So sample and dabble with causes, but wait to marry one. Yes, divorce is possible, but that doesn’t mean everyone should marry at age 14. The poor will be with you always. If you rush too fast to help today’s poor, you may just mess things up, hurting both today’s and tomorrow’s poor.