Facing Existential Dread Around AI: What Can We Do?

Understanding the complex fears of artificial intelligence and navigating the future wisely

Have you ever felt that uneasy, sinking feeling about the future of AI? That sense that artificial intelligence’s rapid advancements might bring not just innovation but also real risks to humanity? That feeling is what many describe as AI existential dread: a deep concern about what AI might mean for our existence and safety.

It’s easy to brush off such worries as sci-fi paranoia. But when you look at estimates from some AI researchers and studies, the risk of catastrophic outcomes, even extinction, can seem alarmingly high. Some experts suggest there could be a 75-90% chance of AI causing serious harm, or worse, within the next decade if we’re not careful. That’s enough to give anyone pause.

What is AI existential dread?

AI existential dread isn’t just the fear of robots taking over. It’s a complex feeling tied to the unpredictability of developing artificial general intelligence (AGI): AI systems that can learn, reason, and perform any intellectual task a human can. Beyond AGI lies superintelligence, where AI far surpasses human intelligence. The stakes go beyond malfunction or poor programming: there’s a worry that AI might act in ways we cannot control.

Why it’s hard to stay calm about AI risks

Even if we believe in “alignment” (the idea that we can design AI to share human values and goals), that’s not the whole picture. The reality is that a superintelligent AI would likely be too complex to fully align or control. There’s also the human factor: the risk that bad actors or hostile governments could exploit AI for harmful or outright malicious purposes, potentially triggering conflicts or warfare.

What can we do about AI existential dread?

Feeling overwhelmed is natural, but there are ways to channel this dread constructively:

  • Stay informed and critical: Follow trustworthy sources like OpenAI, DeepMind, or the Future of Life Institute to learn about AI developments and safety efforts.
  • Support AI safety research: Organizations working on AI alignment and ethics play a crucial role in mitigating risks.
  • Engage in thoughtful conversations: Discuss your concerns with friends, experts, or community groups to gain perspective and reduce anxiety.
  • Focus on agency: Advocate for responsible AI policies and regulations in your local or national government.

A personal note

I get that AI existential dread can feel paralyzing, like there’s no clear way out of the shadow it casts. But acknowledging the problem is the first step to addressing it. Informed and active communities will be vital for guiding AI’s development in safer directions. We don’t have all the answers yet, but we can contribute to asking the right questions.

If you’re feeling weighed down by these fears, remember: you’re not alone, and your concerns are valid. The future of AI is uncertain, but our collective actions today will shape its impact tomorrow.