I Heard a Terrifying—and Hopeful—Prediction About AI. We Need to Talk.

An ex-Google exec laid out the next 15 years, and it’s not the robots we should be worried about. It’s us.

I was scrolling through YouTube the other day, and a video stopped me in my tracks. It was an interview with Mo Gawdat, the former Chief Business Officer at Google’s legendary “moonshot factory,” Google X. What he said about the future of AI wasn’t just interesting—it was a strange mix of terrifying and deeply optimistic. It’s been rattling around in my head ever since.

He argues that we’re heading into about 15 years of absolute chaos. But the twist is, he doesn’t blame the machines. He blames us.

The Real Danger of AI: It’s a Mirror

We’re all a little worried about AI becoming some evil, Skynet-style overlord, right? Well, Gawdat says we’re looking in the wrong direction. The real danger isn’t that AI will spontaneously decide to turn on us. It’s that we’re training it on a dataset that reflects the very worst parts of humanity.

Think about what an AI learns from us today:
* Our online behavior: Trolling, outrage, and toxic comment sections.
* Our media: Polarized news and algorithm-fueled division.
* Our economic systems: Models that often prioritize profit at any human cost.

His point is brutally simple: AI is a child, and we are its parents. It will learn the values we teach it. If we teach it division, exploitation, and outrage, we’ll get an AI that amplifies those things at a scale we can’t even imagine. We won’t be dealing with a machine-led dystopia; we’ll be trapped in a human-made one, supercharged by technology.

The Next 15 Years: A Sobering Look at the Future of AI

So, what does this chaotic period actually look like? Gawdat believes the next decade and a half will be one of the most turbulent in history because we’re moving way too fast. We’re deploying world-changing technology with almost no guardrails, while most of the public still thinks of AI as something out of a sci-fi movie.

He predicts this will lead to:
* Widespread job displacement as AI automates tasks faster than we can adapt.
* Information warfare that makes it nearly impossible to tell what’s real.
* Deepening inequality as a few tech giants control this powerful technology.
* Major social unrest as our current institutions fail to keep up.

This isn’t the future of AI being evil; this is the consequence of human carelessness. We’re building something with god-like potential, but we’re doing it without a global consensus on safety or ethics. As Gawdat points out in his interview on The Diary Of A CEO, the people in charge are either asleep at the wheel or in a reckless race to win, no matter the cost.

But Here’s the Unexpected Twist: The Spiritual Awakening

Just when I was about ready to unplug my router and move to a cabin in the woods, Gawdat’s argument took a fascinating turn. He believes that this period of AI-fueled chaos will eventually force us into a kind of spiritual awakening.

Think about it. AI will hold up a perfect, unflattering mirror to our society. It will show us our biases, our hypocrisies, and the flaws in our systems in a way we can no longer ignore. It will challenge our sense of purpose. If a machine can do your job, write your emails, and create your art, then what makes you *you*?

This forces us to answer some pretty big questions. It pushes us away from defining ourselves by what we do and toward defining ourselves by who we are—our compassion, our creativity, our consciousness.

A Three-Act Play for Humanity

Gawdat lays out a timeline for how this might unfold, which is both scary and surprisingly structured.

  1. The Chaos Era (Now–Late 2030s): This is the storm. Economic disruption, political instability, and a general crisis of truth as AI is misused by humans.
  2. The Awakening Phase (2040s): After the chaos, society starts to rebuild. We finally get serious about AI alignment, regulation, and global cooperation because we’ve seen how bad it can get.
  3. The Utopia (Post-2045): If we make it through the storm, we get to the good part. AI helps us solve huge problems like climate change and disease. It manages systems to create abundance, leaving humans to focus on meaning, connection, and creativity. For more on this, you can explore the work being done at institutions like the Stanford Institute for Human-Centered Artificial Intelligence (HAI), which focuses on guiding AI’s future.

We Still Have a Choice

What stuck with me most is that this isn’t a prophecy. It’s a warning and an invitation. The future isn’t set. Gawdat, who has written extensively on this, insists that a beautiful future is possible, but it requires a radical shift in our values, starting right now.

We have to choose to be better parents to this emerging intelligence. That means demanding more ethical technology, engaging in more compassionate discourse, and maybe, just maybe, starting to clean up our own mess before we ask an AI to do it for us.

The future of AI is really just the future of us. And that’s either the scariest or the most hopeful thought I’ve had all year.

Published on: 24 May 2024