Learn why AI safety matters
One of the most important things you can do to help with AI alignment and the existential risk (x-risk) posed by superintelligence is to learn about it. Here are some resources to get you started.
Websites
- AISafety.com & AISafety.info. The landing pages for AI safety. Learn about the risks, communities, events, jobs, courses, ideas for how to mitigate the risks, and more!
- AISafety.dance. A more fun, friendly, and interactive introduction to catastrophic AI risks!
- AISafety.world. The whole AI safety landscape with all the organizations, media outlets, forums, blogs, and other actors and resources.
- IncidentDatabase.ai. Database of incidents where AI systems caused harm.
Newsletters
- PauseAI Substack: Our newsletter.
- TransformerNews: Comprehensive weekly newsletter on AI safety and governance.
- Don’t Worry About The Vase: A newsletter about AI safety, rationality, and other topics.
Videos
- Kurzgesagt - A.I. ‐ Humanity’s Final Invention? (20 mins). The history of AI, and an introduction to the concept of superintelligence.
- 80k hours - Could AI wipe out humanity? (10 mins). A great introduction to the problem, from a down-to-earth perspective.
- Superintelligent AI Should Worry You… (1 min). The best super short introduction.
- Don’t look up - The Documentary: The Case For AI As An Existential Threat (17 mins). Powerful and nicely edited documentary about the dangers of AI, with many expert quotes from interviews.
- Countries create AI for reasons (10 mins). A caricature of the race toward superintelligence and its dangers.
- Max Tegmark | TED Talk (2023) (15 mins). AI capabilities are improving more quickly than expected.
- Tristan Harris | Nobel Prize Summit 2023 (15 mins). Talk on why we need to “Embrace our paleolithic brains, upgrade our medieval institutions and bind god-like technology”.
- Sam Harris | Can we build AI without losing control over it? (15 mins). TED talk about the crazy situation we’re in.
- Ilya: the AI scientist shaping the world (12 mins). OpenAI’s co-founder and former Chief Scientist explains how AGI will take control over everything, and why we must therefore teach it to care for humans.
- Exploring the dangers from Artificial Intelligence (25 mins). Summary of cybersecurity, biohazard and power-seeking AI risks.
- Why this top AI guru thinks we might be in extinction level trouble | The InnerView (26 mins). Televised interview with Connor Leahy about AI x-risks.
- The AI Dilemma (1hr). Presentation about the dangers of AI and the race that AI companies are stuck in.
- Robert Miles’ YouTube videos are a great place to start understanding most of the fundamentals of AI alignment.
Podcasts
- Future of Life Institute | Connor Leahy on AI Safety and Why the World is Fragile. Interview with Connor about AI safety strategies.
- Lex Fridman | Max Tegmark: The Case for Halting AI Development. Interview that dives into the details of our current dangerous situation.
- Sam Harris | Eliezer Yudkowsky: AI, Racing Toward the Brink. Conversation about the nature of intelligence, different types of AI, the alignment problem, Is vs Ought, and more. One of many episodes Making Sense has on AI safety.
- Connor Leahy, AI Fire Alarm. Talk about the intelligence explosion and why it would be the most important thing that could ever happen.
- The 80,000 Hours Podcast recommended episodes on AI. Not 80k hours long, but a compilation of episodes of The 80,000 Hours Podcast about AI safety.
- Future of Life Institute Podcast episodes on AI. All of the episodes of the FLI Podcast on the future of Artificial Intelligence.
Podcasts featuring PauseAI members can be found in the media coverage list.
Articles
- The ‘Don’t Look Up’ Thinking That Could Doom Us With AI (by Max Tegmark)
- Pausing AI Developments Isn’t Enough. We Need to Shut it All Down (by Eliezer Yudkowsky)
- The Case for Slowing Down AI (by Sigal Samuel)
- The AI Revolution: The Road to Superintelligence (by WaitButWhy)
- How rogue AIs may arise (by Yoshua Bengio)
- Reasoning through arguments against taking AI safety seriously (by Yoshua Bengio)
If you want to read what journalists have written about PauseAI, check out the list of media coverage.
Books
- Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World (Darren McKee, 2023). Get it for free!
- The Precipice: Existential Risk and the Future of Humanity (Toby Ord, 2020)
- The Alignment Problem (Brian Christian, 2020)
- Human Compatible: Artificial Intelligence and the Problem of Control (Stuart Russell, 2019)
- Life 3.0: Being Human in the Age of Artificial Intelligence (Max Tegmark, 2017)
- Superintelligence: Paths, Dangers, Strategies (Nick Bostrom, 2014)
- Our Final Invention: Artificial Intelligence and the End of the Human Era (James Barrat, 2013)
Courses
- AGI safety fundamentals (30hrs)
- CHAI Bibliography of Recommended Materials (50hrs+)
- AISafety.training: Overview of training programs, conferences, and other events
Organizations
- Future of Life Institute, led by Max Tegmark, started the open letter.
- FutureSociety
- Conjecture. Start-up working on AI alignment and AI policy, led by Connor Leahy.
- Existential Risk Observatory. Dutch organization that informs the public about x-risks and studies communication strategies.
- Center for AI Safety (CAIS), a research center at the Czech Technical University in Prague.
- Center for Human-Compatible Artificial Intelligence (CHAI), led by Stuart Russell.
- Machine Intelligence Research Institute (MIRI), doing mathematical research on AI safety, founded by Eliezer Yudkowsky.
- Centre for the Governance of AI
- Institute for AI Policy and Strategy (IAPS)
- The AI Policy Institute
- AI Safety Communications Centre
- The Midas Project. Corporate pressure campaigns for AI safety.
- The Human Survival Project
- AI Safety World. An overview of the AI safety landscape.
If you are convinced and want to take action
There are many things that you can do. Writing a letter, going to a protest, donating some money, or joining a community is not that hard! And these actions have a real impact. Even when facing the end of the world, there can still be hope and very rewarding work to do.
Or if you still don’t feel quite sure about it
Learning about the psychology of x-risk could help you.