A list of resources for people interested in learning about the current discussions on AI safety. The field moves fast, so this list may be outdated.
For
- Marc Andreessen on Why AI Will Save the World
- Marc Andreessen on the Lex Fridman Podcast | Lex Fridman’s Website Link
- George Hotz on AI safety people and how open sourcing AI is the solution
- Also look into Yann LeCun and the Effective Accelerationism movement.
Against
Mostly Eliezer Yudkowsky, since he’s one of the most outspoken (and influential) voices in this camp:
- Eliezer Yudkowsky’s AGI Ruin: A List of Lethalities
- Eliezer Yudkowsky on shutting down AI
- Eliezer Yudkowsky on how AI will kill everyone
- Also look into people like Max Tegmark, Nick Bostrom, Stuart Russell, etc…
- Take a look at LessWrong or the AI Alignment Forum.
Debates
- Munk Debate on Artificial Intelligence | Bengio & Tegmark vs. Mitchell & LeCun
- George Hotz vs Eliezer Yudkowsky AI Safety Debate
Books
- Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. On what a superintelligent being is and how it could change the world. Also introduces the alignment problem and discusses why it’s so hard. Ideas in this book are the basis of a LOT of the current discussions on AI safety.
Other
- Sam Altman, CEO of OpenAI, on fear of AI
- Eliezer Yudkowsky on the Singularity. Yudkowsky in his pre-doom days; a good explanation of what the singularity is.