See also: [[AI predictions]]; [[Reading inbox]]; [[AI journal]]; [[Learning to work with AI]]; [[AI scratchpad]]

Overall takes:
- [My p(doom) is less than 1/5. My p(utopia) is similar.]
- [[All the questions and all the answers]]

Study journal:
- Part 1. [Who will rule the Earth in 2100?](https://sun.pjh.is/who-will-rule-the-earth-in-2100)
- Part 2. [Digital Minds or Butlerian Accord? Choiceworthy scenarios for 2100.](https://sun.pjh.is/digital-minds-or-butlerian-accord-part-1)
- Part 3. What does it mean to rule the Earth?
- [[Wild ramblings]]

Some important questions:
- [Will AI systems be capable of flourishing?]
- [Should we see AI systems as our descendants, or as an alien invasion, or a virus, or our tools?]
- ...

Timelines:
- Models
  - Effective compute is the best option. That's Ajeya's bioanchors and Davidson/Epoch. (A toy effective-compute extrapolation is sketched at the end of this note.)
- Milestones
  -
- Speed of capability growth
  - They'll be much, much better within 5 years.
  - See [[AI predictions]].
- Speed of economic change
  - [[It usually takes decades for new general purpose technologies to be widely adopted]]
  - [[AI might transform the economy much faster than other general purpose technologies]]

Observations:
- [Everyone agrees that AI systems could cause some kinds of catastrophe in the next decade.]
- [There are a bunch of scenarios that everyone agrees would count as catastrophe, but another bunch where there is major disagreement.]
- [[AI systems will help us avoid other catastrophic and existential risks]]
- [If humans are agents, so are AI systems]

Technoptimism + Tradhumanism + AINotKillEveryoneism + amor fati.

What do?
- Improve policing.
- Cybersecurity.
- Figure out ways to mitigate the impacts of leaked models.
- Consider banning or strictly regulating particularly dangerous research.
- Try to get people of excellent moral character into positions of power, e.g. AI labs, relevant parts of the natsec community, etc. Take the Great Man theory of history seriously in the context of AI.
- [[Butlerian Accord might be our least bad option, actually]]

High-level:
- [[Trial and error is great until it kills you.]]
- [[Catastrophe and extinction are very different]]
- [[Growing the Borg]]
- [[We know it works but we don't know how]]

Alignment:

Governance:
- [[More things are politically possible than you think]]

---

Older notes:
- [Should we create AGI?](https://docs.google.com/document/d/1_EISwv11dpVV1SDYvRKZ2Q2hKFrVgAxmeLTq1u1ohBU/edit)
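
Appendix: a toy effective-compute extrapolation. This is a minimal sketch of the effective-compute framing from the Timelines section above, not Ajeya's bioanchors model or the Davidson/Epoch model; the base compute figure and both growth rates are illustrative assumptions, not published estimates.

```python
# Effective compute = physical training compute x algorithmic efficiency.
# All constants below are illustrative assumptions.

BASE_YEAR = 2024
BASE_TRAINING_FLOP = 1e26      # assumed frontier training run, in FLOP
COMPUTE_GROWTH_PER_YEAR = 4.0  # assumed ~4x/year growth in training compute
ALGO_GROWTH_PER_YEAR = 2.0     # assumed ~2x/year algorithmic efficiency gains

def effective_compute(year: int) -> float:
    """Effective compute of the frontier run in `year`, in base-year-equivalent FLOP."""
    t = year - BASE_YEAR
    physical = BASE_TRAINING_FLOP * COMPUTE_GROWTH_PER_YEAR ** t
    efficiency = ALGO_GROWTH_PER_YEAR ** t  # multiplier from better algorithms
    return physical * efficiency

for year in (2025, 2027, 2030):
    print(f"{year}: ~{effective_compute(year):.1e} effective FLOP")
```

The point of the framing: because compute scaling and algorithmic progress multiply, effective compute grows much faster than either alone, which is why timeline models anchored on it imply large capability jumps within a few years.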