Some things I've been reading over the past few days or weeks.
See also: [[Book pile]].
_Last updated: 2023-03-31 19:00_
_This list is automatically generated._
**AI**
- Nobody’s on the ball on AGI alignment
- The Age of AI has begun - gatesnotes.com
- ‘On With Kara Swisher’: Sam Altman on the GPT-4 R…
- Avoiding Existential Threats as a Non-Zero-Sum Ga…
- GPT-4 Technical Report
- More information about the dangerous capability evaluations we did with GPT-4 and Claude. - LessWrong
- (My understanding of) What Everyone in Technical Alignment is Doing and Why - AI
- Building Secure & Reliable Systems
- Acceleration. - by Ethan Mollick - One Useful Thing
- Sparks of Artificial General Intelligence - Early experiments with GPT-4
- Don't accelerate problems you're trying to solve - LessWrong
- Thoughts on the impact of RLHF research - LessWrong
- GPT-4 System Card
- Pinker on Alignment and Intelligence as a "Magical Potion"
- Natural Selection Favors AIs over Humans - Dan Hendrycks
- Some high-level thoughts on the DeepMind alignment team's strategy
- Here’s What It Would Take To Slow or Stop AI - Jon Stokes
- Is Power-Seeking AI an Existential Risk? - Joe Carlsmith
- Cyborgism - LessWrong
- Alignment Curriculum - AGI Safety Fundamentals
- Preventing an AI-related catastrophe - 80,000 Hours
- Risk, Again - by Robin Hanson - Overcoming Bias
**Nick Bostrom**
- Existential Risks Analyzing Human Extinction Scenarios
**Joe Carlsmith**
- Seeing more whole - Joe Carlsmith
**Accelerationism**
- E12- Effective Accelerationism and the AI Safety Debate with Bayeslord, Beff Jez