It’s a great marketing pitch, but what happens when we treat deep human issues like technical bugs? Let’s take an honest look at the promise of AI for social good.
You’ve seen the headlines, right? “AI to Solve Climate Change,” “How AI is Ending World Hunger,” “An Algorithm to Fix Inequality.” It sounds incredible. The promise of AI for social good suggests that we can finally use our most advanced technology to solve our oldest, most complicated human problems. And I’ll be honest, a part of me wants to believe it. It’s a comforting thought.
But lately, I’ve been thinking about it more, and it feels like we’re being sold a story: that messy, deeply rooted social issues are basically just technical bugs waiting for the right line of code to fix them. And the more you peel back the layers, the more you realize that this view isn’t just overly optimistic; it might be actively harmful.
The Seductive Trap of Techno-Solutionism
There’s a term for this: “techno-solutionism.” It’s the belief that every problem, no matter how complex, has a technological solution. It’s treating a political or historical crisis like a broken laptop. Just run a diagnostic, find the bug, patch it, and reboot.
But human problems don’t work that way. Poverty isn’t a bug in the system; it is the system for many people, built over centuries of policy, history, and human behavior. You can’t just throw an algorithm at it and expect a clean fix. Trying to do so ignores the one thing that truly matters: context.
Think of it this way: you wouldn’t try to fix a crumbling bridge by giving everyone a faster car. The cars might be great, but they do nothing to address the foundational problem. In the same way, an AI model might be able to predict where a famine is likely to occur, but it can’t untangle the political corruption, supply chain failures, or historical conflicts that actually caused it.
The Problem with ‘AI for Social Good’: Data and Bias
So, where does the data for these AI systems come from? It comes from our world. And our world, as we know, is full of biases, prejudices, and inequality. AI learns from the data we give it, and if that data is biased, the AI will be, too. It doesn’t just learn our patterns; it learns our flaws and then amplifies them with terrifying efficiency.
We’ve seen this happen over and over again.
* Hiring algorithms that penalize female candidates because they were trained on historical data from a male-dominated industry.
* Facial recognition systems that misidentify people of color at higher rates, leading to false accusations and even wrongful arrests.
* Loan-approval AI that deepens existing economic disparities.
These aren’t just technical glitches. They are reflections of the societal biases embedded in the data we feed the machines. As the American Civil Liberties Union (ACLU) points out, AI can easily deepen existing racial and economic disparities if we’re not incredibly careful. An “AI for social good” initiative built on biased data isn’t for social good at all—it’s just a high-tech way to maintain the status quo.
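To make that mechanism concrete, here’s a minimal sketch with entirely synthetic, hypothetical data (no real hiring system works exactly like this): a simple classifier trained on historical decisions that disfavored one group will faithfully reproduce that disadvantage, even for two candidates with identical qualifications.

```python
# A minimal sketch (synthetic, hypothetical data) of how a model trained on
# biased historical decisions reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Each candidate: a qualification score and a group flag (0 or 1).
qualification = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)

# Historical labels: decisions that systematically disfavored group 1,
# independent of qualification -- the bias we assume lives in the archive.
logits = 1.5 * qualification - 1.2 * group
hired = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(np.column_stack([qualification, group]), hired)

# Score two equally qualified candidates who differ only by group membership.
equal_candidates = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(equal_candidates)[:, 1])
# The group-1 candidate gets a lower predicted probability, because the model
# has faithfully learned the historical pattern, not because of any merit gap.
```

Nothing in that sketch is malicious code; the harm comes entirely from the data it was handed.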
Who Really Benefits from This Narrative?
This is the big question for me. When a massive tech company launches a splashy “AI for social good” program, who is it really for?
Of course, it’s fantastic PR. It positions the company as a benevolent force for change, a savior with a server farm. This can be a convenient way to distract from other, less flattering conversations about their business, like data privacy, monopolistic practices, or the environmental impact of their data centers.
It also reinforces the idea that these companies are the only ones with the tools and the brilliance to solve the world’s problems. It takes power and agency away from the communities actually experiencing the issues and puts it in the hands of engineers thousands of miles away. True, lasting solutions require listening to people and empowering them—not imposing a technical solution from the outside. Groups like the Electronic Frontier Foundation (EFF) are constantly exploring the complex relationship between technology and civil liberties, reminding us that the human element is non-negotiable.
So, am I saying AI can never be used for good? No, not at all. It can be a powerful tool for analysis, for finding patterns, and for helping humans make better decisions. But it’s just that—a tool. It’s not a savior.
The next time you see a grand promise about AI for social good, I think it’s healthy to be a little skeptical. We should ask the tough questions: Who built this? What data is it using? Who is being left out of the conversation? And most importantly, who does this really serve?
Because real change isn’t about finding the perfect algorithm. It’s about doing the messy, complicated, and deeply human work of building a better world, one conversation at a time.