Who Decides What’s Ethical in AI? Let’s Talk About It

Understanding Ethics in AI: Whose Rules Are We Following?

Ethics in AI has become a hot topic as these systems take on crucial roles in our lives, from hiring decisions and healthcare to policing and even warfare. But here's the kicker: while everyone agrees that ethics matter, there's no clear consensus on whose ethics we should follow or who actually gets to set the rules.

Thinking about ethics in AI feels a bit like standing in the middle of a crowded room where different voices shout different rules. Should engineers be the ones deciding? Policymakers? Philosophers, tech CEOs, or voters? It's a hard question because each group values different things and sees accountability and fairness from a different angle.

I recently had a deep conversation with an AI ethics researcher, and what stood out was this uneasy truth — the rules around AI ethics seem vague, often controlled by big corporations, and usually made reactively instead of proactively. So, when AI decides who gets hired or who faces law enforcement scrutiny, we’re often trusting invisible guidelines that no one fully agrees on.

Whose Ethics Should Guide AI?

The “ethics in AI” debate isn’t just about technology; it’s about human values and judgment. For example, engineers might focus on what’s technically feasible and safe, while policymakers stress legal compliance and the public interest. Philosophers raise questions about morality and rights, CEOs might emphasize business interests, and ordinary people want fairness and transparency.

This mix makes it tricky. Consider the example of AI in hiring: if a company uses AI to scan resumes, how do we ensure it isn’t biased? Whose idea of “fair” gets prioritized? It’s not always straightforward.
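To make “whose idea of fair” concrete, here’s a minimal Python sketch. All of the numbers, group labels, and function names are hypothetical, invented purely for illustration; it compares two common fairness definitions, demographic parity and equal opportunity, on the same screening outcomes.

```python
# A minimal sketch (all data and names here are hypothetical) showing how two
# common definitions of "fair" can disagree about the same hiring outcomes.

def selection_rate(selected, total):
    """Fraction of all applicants the screener advanced."""
    return selected / total

def qualified_selection_rate(qualified_selected, qualified_total):
    """Fraction of *qualified* applicants the screener advanced."""
    return qualified_selected / qualified_total

# Hypothetical outcomes from a resume screener for two applicant groups.
group_a = {"total": 100, "selected": 30, "qualified": 50, "qualified_selected": 30}
group_b = {"total": 100, "selected": 15, "qualified": 25, "qualified_selected": 15}

# Demographic parity asks: do both groups get selected at the same rate?
dp_a = selection_rate(group_a["selected"], group_a["total"])   # 0.30
dp_b = selection_rate(group_b["selected"], group_b["total"])   # 0.15

# Equal opportunity asks: among qualified applicants, are rates equal?
eo_a = qualified_selection_rate(group_a["qualified_selected"], group_a["qualified"])  # 0.60
eo_b = qualified_selection_rate(group_b["qualified_selected"], group_b["qualified"])  # 0.60

print(f"Selection rates: A={dp_a:.2f}, B={dp_b:.2f} (parity gap: {dp_a - dp_b:.2f})")
print(f"Qualified selection rates: A={eo_a:.2f}, B={eo_b:.2f} (opportunity gap: {eo_a - eo_b:.2f})")
# By equal opportunity this screener looks fair (0.60 vs 0.60); by demographic
# parity it looks biased (0.30 vs 0.15). Which definition wins is a value choice.
```

The same system can pass one fairness test and fail another, and fairness research has shown that several of these criteria are mathematically incompatible except in special cases. So picking the definition is a human judgment, not a technical one.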

Accountability: Who’s Responsible?

With AI making impactful decisions, accountability becomes a big question. Who do we hold responsible if an AI system causes harm? The developers? The companies that deploy it? Or the regulators who failed to set proper guidelines? Ethics in AI goes hand in hand with governance: setting up the right oversight so these systems do what we want without causing unintended damage.

What Can We Do?

The conversation about ethics in AI is ongoing and evolving. Here are a few ideas that are gaining traction:
Inclusive dialogues: Bringing a wider variety of voices into the discussion — not just experts but people affected by AI too.
Transparent guidelines: Creating clear, accessible rules about how AI can be used ethically.
Continuous review: Ethics in AI isn’t a one-time checklist. It requires ongoing assessment as technology and society change (the sketch after this list shows one way that can look in practice).
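As a rough illustration of that last point, here’s a hedged Python sketch of continuous review. The function names, batch data, and threshold are all hypothetical: instead of auditing a system once at launch, each new batch of decisions is re-checked against a fairness threshold and flagged for human review when it drifts.

```python
# A minimal sketch of "continuous review" (function names, data, and the
# threshold are hypothetical): recompute a fairness metric on each new batch
# of decisions and flag drift, rather than auditing once at launch.

def selection_rates(decisions):
    """decisions: list of (group, was_selected) tuples -> selection rate per group."""
    totals, selected = {}, {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def audit_batch(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times the
    highest group's rate. The 0.8 figure echoes the US EEOC four-fifths rule,
    used here purely as an example threshold."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    flagged = {g: r for g, r in rates.items() if r < threshold * highest}
    return rates, flagged

# Hypothetical weekly batch of screening decisions.
batch = ([("A", True)] * 30 + [("A", False)] * 70 +
         [("B", True)] * 18 + [("B", False)] * 82)
rates, flagged = audit_batch(batch)
print("rates:", rates)                            # {'A': 0.30, 'B': 0.18}
if flagged:
    print("review needed for:", sorted(flagged))  # B: 0.18 < 0.8 * 0.30
```

The design choice here is that the check runs on every batch and escalates to humans; the code doesn’t decide what counts as acceptable drift, people do.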

If you want to explore this topic further, here’s a great episode on AI ethics where a researcher dives into these questions alongside me.

Why It Matters

Talking about ethics in AI might seem abstract, but it has real consequences. These frameworks shape who gets opportunities, who’s protected, and who might face unfair treatment because of an opaque algorithm.

We may not have all the answers, but the discussion itself is vital. After all, these are technologies built by humans, for humans — so it’s on us to decide the rules of the game.


For further reading on AI ethics and governance, check out resources from the Partnership on AI, and explore how the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems is shaping guidelines.

Let’s keep this conversation going — it’s one worth having as AI becomes part of everyday life.