From Coder to Contributor: How to Break Into AI Safety as a Software Engineer

Your background in software engineering isn’t a liability; it’s your greatest strength for getting into AI safety and alignment research. Here’s how to make the move.

It’s a familiar feeling for a lot of us in tech. You’ve been in the software game for a decade, maybe more. You’re good at it. You can architect systems, squash bugs, and lead a team. But there’s a quiet question that starts to bubble up: “Is this it?” You start reading about the incredible advancements in AI, and it isn’t just the capabilities that catch your eye, but the profound questions surrounding them, especially AI safety, alignment, and interpretability. Suddenly you feel a new sense of curiosity and a pull toward making a transition to AI safety.

But then comes the second feeling: a wave of imposter syndrome. You look at the people in these fields and the descriptions for programs like the MATS Program or the OpenAI Residency, and it seems like they’re exclusively for PhDs from top universities who have a stack of published papers. As a traditional software engineer, it can feel like you’re on the outside looking in, with no clear path forward.

If that sounds like you, I get it. But I want to offer a different perspective. Your background isn’t a disadvantage; it’s a unique and powerful asset.

Why Your Engineering Skills Are Crucial for a Transition to AI Safety

Let’s get one thing straight: the field of AI safety desperately needs great engineers. While a lot of the discourse is philosophical and research-driven, the actual implementation of safe and aligned AI systems is an engineering problem. The most brilliant alignment theory in the world is useless if it can’t be translated into robust, scalable, and reliable code.

Think about your decade-plus of experience. You know how to:

  • Build complex systems: You understand trade-offs, dependencies, and how small changes can have cascading effects. This is critical for understanding and mitigating risks in complex AI models.
  • Debug the un-debuggable: You’ve spent countless hours staring at code, trying to figure out why a system is behaving in an unexpected way. This is the very essence of interpretability: trying to understand the “black box.”
  • Apply rigorous standards: You know the importance of testing, redundancy, and creating systems that don’t fall over in the real world. The stakes in AI safety are just much, much higher.

Your practical, hands-on experience is a grounding force that many pure researchers don’t have. You’re not just thinking about abstract problems; you’re thinking about how they would actually be built and where they would break.

Creating Your “Research” Portfolio Without a PhD

The biggest hurdle for many engineers is the lack of a formal research background. How do you compete with people who have published papers and academic credentials? The answer is: you don’t compete on their terms. You create your own.

A “portfolio” in this space doesn’t have to be a list of peer-reviewed papers. It’s a collection of evidence that shows you can think critically, learn quickly, and apply your skills to new domains.

  • Start a Project: Don’t just read; build. Try to replicate the results of an interesting interpretability paper (a starter sketch follows this list). Find an open-source AI safety project and contribute. Even a “failed” project is a fantastic learning experience you can write about. Your GitHub can become your portfolio.
  • Write About Your Journey: Start a blog, a Substack, or even just a public set of notes. Document what you’re learning, what confuses you, and what ideas you have. This demonstrates your ability to engage with the material seriously. You’re showing your work, and that’s often more valuable than a certificate.
  • Engage with the Community: The AI safety community is incredibly active online. Participate in forums like the Alignment Forum or LessWrong. Engage in thoughtful discussions. Your insights as an experienced engineer will be valued.
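
If “start a project” feels abstract, here is a minimal sketch of the kind of first experiment a replication or interpretability project often begins with: load a small open model with the Hugging Face transformers library and capture the activations of one layer with a PyTorch forward hook. This isn’t drawn from any particular paper; the model, layer index, and prompt are arbitrary placeholders you would swap for whatever you’re actually studying.

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  # A small open model so the experiment runs on a laptop (placeholder choice).
  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")
  model.eval()

  captured = {}

  def save_activation(module, inputs, output):
      # Each GPT-2 block returns a tuple; the hidden states are its first element.
      captured["hidden"] = output[0].detach()

  # Hook a middle transformer block (layer 6 of 12 in the small GPT-2).
  hook = model.transformer.h[6].register_forward_hook(save_activation)

  batch = tokenizer("The capital of France is", return_tensors="pt")
  with torch.no_grad():
      model(**batch)
  hook.remove()

  acts = captured["hidden"]       # shape: (batch, sequence_length, hidden_size)
  print(acts.shape)
  print(acts.norm(dim=-1))        # per-token activation norms at that layer

Twenty-odd lines like these are the seed of most “look inside the black box” work: once you can pull activations out of a running model, you can start asking which directions in them track the things you care about. Commit experiments like this, along with short write-ups of what you found (or failed to find), and your GitHub starts doing the work a publication list would.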

A Practical Look at Competitive AI Safety Programs

So, what about those residency programs? It’s true, they are highly competitive. But they aren’t just looking for a specific resume. They’re looking for people with a deep, demonstrated commitment to the field and a unique perspective. Your story—a senior engineer making a deliberate transition to AI safety—is a powerful one. It shows drive and a real-world perspective.

Organizations like 80,000 Hours provide fantastic career guides and resources that can help you understand the landscape and find paths beyond the most famous programs. They emphasize that there are many ways to contribute.

The goal of applying to a program like the MATS Program isn’t just to get in. The process of preparing your application—doing projects, writing up your thoughts, and clarifying your motivations—is valuable in itself. It forces you to build the very portfolio you need to move forward, whether you’re accepted or not. Some of these programs are specifically designed for people looking to switch fields, providing the mentorship and context you need. The OpenAI Residency is another great example of a program built to bring talented people from diverse fields into AI.

Don’t self-reject. Apply, but don’t let a single application define your journey. The real goal is to build your skills and knowledge, and that can happen regardless of an acceptance letter. The path for a software engineer into this field is less about formal education and more about focused, self-directed learning and building. It’s a marathon, not a sprint, but your journey is just beginning, and you’re starting from a much stronger place than you think.