Take action

The group of people who are aware of AI risks is still small.

You are now one of them.

Your actions matter more than you think.

Here are some examples of what you can do.

Inform others

  • Talk to people in your life about this. Answer their questions, and get them to act. Use our counterarguments to help you be more persuasive.
  • Share about AI risk on social media. This website might be a good start.
  • Tabling and flyering are great ways to reach many people in a short amount of time.
    • Keep printed one-page flyers on hand, or have them ready to show and share from your phone. They are a quick, useful resource for FAQs, talking points, expert quotes and opinions, online resources, and more.
  • Create articles, videos, or memes. Collaborate with others in the Discord server (in the #projects channel).

Inform politicians

Join the movement

Learn more

If you…

If you are a politician or work in government

  • Prepare for the next AI safety summit. Form coalitions with other countries. Work towards a global treaty.
  • Invite (or subpoena) AI lab leaders to parliamentary/congressional hearings to give their predictions and timelines of AI disasters.
  • Establish a committee to investigate the risks of AI.
  • Make AI safety a priority in your party’s platform and your government’s policy, or at least make sure it’s on the agenda.

If you know (international) law

If you work in AI

  • Don’t work towards superintelligence. If you have an idea that could make AI systems 10x faster, please don’t build it, spread it, or talk about it. We need to slow down AI development, not speed it up.
  • Talk to your management and colleagues about the risks. Get them to take an institutional position on this.
  • Hold a seminar on AI safety at your workplace. Check out the videos for inspiration.
  • Sign the Statement on AI Risk.