Iason Gabriel FLI interview, which I've posted to [The Valmy](https://thevalmy.com/48).
* A recommender system could serve:
* First order preferences/desires: what you want in the moment
* Second order desires: what you reflectively want, what kind of person you want to be, what you want to want
* But it could go broader than just preference satisfaction:
* Things that are in my interest to know
* Cultivate knowledge
* Moral improvement
* The existing field of value-centered design says we need to consult people on values early in the design process.
* Minimalist and maximalist conceptions of AI alignment, technology design.
* Minimalist: avoid bad outcomes, safety
* Maximalist: aim for very best outcomes, from moral point of view. Even if we design safe technology, we might still be leaving a lot of value on the table.
* Lots of preferences that are just so unethical they should not be counted (e.g. Ted Bundy).
* Intention: a partially filled out plan of action that commits us to some end.
* Tension in liberalism: don't coerce, be pluralist, but do need to teach values at some point.
* Revealed preference: I observe you doing A or B, and from that I infer that you have a deeper preference for the thing you choose.
* We want to give people the kinds of things they can aspire to.
* Humean rationality: we can all be perfectly rational but no value convergence.
* Kantian rationality: more substantive; suggests that rationality involves evaluative judgements, and that there will be convergence among the perfectly rational.
* If you dig down, most moral realists tend to hold this notion of rationality
* Moral Machine experiment: several million people played the game online.
* In many parts of the world, people valued rich people's lives more highly than poor people's lives.
* Dilemma: common sense morality is deeply flawed. But we also don't want to coerce people to adopt new values.
* How do we respect pluralism without getting stuck in the contemporary morass of prejudicial beliefs that will clearly be condemned by future generations?
* Patient democratic discourse => incremental improvement
* Human needs => human rights
* Need empirical claim + fundamental standing normative claim => human rights
* Worry about levelling down to meet real world conditions. Holding up an ideal star that won't be fully realised may still help get you to a better place.
* Keen on veil of ignorance as vehicle for impartial discovery
* Principle of non-domination
* One true moral theory approach vs procedural approaches to alignment
* Inclusive moral deliberation
* Even Rorty would say wouldn't it be nice
* We may need to impose pluralism on fundamentalists, dominating them
* You are entitled to non domination so long as you are prepared not to dominate other people, to accept that there is a kind of moral equality such that we need to cooperate and cohabit together.
* Problem of domination is bigger for moral realists
* What important questions actually get punted to the long reflection? Surely some stuff we need to decide about sooner.
* Lucas: eradicate suffering, poverty, provide human rights before LR. LR mostly about the one true moral theory question; realism or not; does knowledge matter; what to do with the cosmic endowment.
* Iason: I find it hard to imagine a world in which these huge but to some extent prosaic questions have been addressed and in which we then turn our attention to other things.
* Not feasible to press pause.
* We need productive global deliberation right now
* How much can we punt to the future?
* Lucas: The principle of non-domination may not be fundamental but only pragmatic; it may not make sense in the very long run.
* Iason: consent is critical.
* What does time add to robust epistemic certainty?
Iason Gabriel: It’s quite likely that if you spend a long time thinking about something, at the end of it, you’ll be like, “Okay, now I have more confidence in a proposition that was on the table when I started.” But does that mean that it is actually substantively justified? And what are you going to say if you think you’re substantively justified, but you can’t actually justify it to other people who are reasonable, rational and informed like you?
It seems to me that even after a thousand years, you’d still be taking a leap of faith of the kind that we’ve seen people take in the past with really, really devastating consequences. **I don’t think it’s the case that ultimately there will be a moral theory that’s settled and the confidence in the truth value of it is so high that the people who adhere to it have somehow gained the right to kind of run with it on behalf of humanity.** Instead, I think that we have to proceed a small step at a time, possibly in perpetuity, **and make sure that each one of these small decisions is subject to continuous negotiation, reflection and democratic control**.
Lucas Perry: The long reflection though, to me, seems to be about questions like that because you’re taking a strong epistemological view on meta-ethics and that there wouldn’t be that kind of clarity that would emerge over time from minds far greater than our own. From my perspective, I just find the problem of suffering to be very, very, very compelling.
Iason Gabriel: **The hypothesis that you can create moral arguments that are so well-reasoned that they persuade anyone is, I think, the perfect statement of a certain enlightenment perspective on philosophy that sees rationality as the tiebreaker and the arbiter of progress.** In a sense, the whole project that I’ve outlined today rests upon a recognition or an acknowledgement that that is probably unlikely to be true: when people reason freely about what the good consists in, they do come to different conclusions.
And I guess the kind of thing people would point to there as evidence is just the nature of moral deliberation in the real world. You could say that if there were these winning arguments that just won by force of reason, we’d be able to identify them. But in reality, when we look at how moral progress has occurred, it requires a lot more than just reason-giving. To some extent, I think the master argument approach itself rests upon mistaken assumptions, and that’s why I wanted to go in this other direction. By a twist of fate, if I were mistaken and the master argument were possible, it would also satisfy a lot of conditions of political legitimacy. Right now, we have good evidence that it isn’t possible, so we should proceed in one way. If it is possible, then those people can appeal to the political processes.
2021-10-01 Re-listen
"True moral theory" approach. Eventually we get that, then align AGI with that. Argues against that in paper:
1. How would we ever know we'd found the true theory?
2. How do we persuade others that we have the true moral theory? Just knowing we have it doesn't necessarily give us the right to impose it on others.
- Rousseau: coercion is justified, people should be forced to be free.
- Iason: we need to avoid domination. So we need tools from political philosophy, which particularly in liberal tradition, has tried to answer this question of how can we all live together on reasonable terms that preserve everyone's capacity to flourish despite the fact that we have variation in what we ultimately believe to be just, true and right.