At a Glance
- Tasks: Lead safety evaluations and develop automated benchmarks for cutting-edge AI models.
- Company: Join a pioneering team from top AI companies focused on open superintelligence.
- Benefits: Top-tier salary, comprehensive health benefits, and generous parental leave.
- Why this job: Make a real impact in AI safety while shaping the future of technology.
- Qualifications: Graduate degree in Computer Science or equivalent experience in AI Safety required.
- Other info: Dynamic startup environment with opportunities for personal and professional growth.
The predicted salary is between £48,000 and £72,000 per year.
Our Mission
Reflection’s mission is to build open superintelligence and make it accessible to all. We’re developing open-weight models for individuals, agents, enterprises, and even nation states. Our team of AI researchers and company builders comes from DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic, and beyond.
About The Role
- Own the red-teaming and adversarial evaluation pipeline for Reflection’s models, continuously probing for failure modes across security, misuse, and alignment gaps.
- Work hand-in-hand with the Alignment team to translate safety findings into concrete guardrails, ensuring models behave reliably under stress and adhere to deployment policies.
- Validate that every release meets the lab’s risk thresholds before it ships, serving as a critical gatekeeper for our open-weight releases.
- Develop scalable, automated safety benchmarks that evolve alongside our model capabilities, moving beyond static datasets to dynamic adversarial testing.
- Research and implement state-of-the-art jailbreaking techniques and defenses to stay ahead of potential vulnerabilities in the wild.
About You
- Graduate degree (MS or PhD) in Computer Science, Machine Learning, or a related discipline, or equivalent practical experience in AI safety.
- Deep technical understanding of LLM safety, including adversarial attacks, red-teaming methodologies, and interpretability.
- Strong software engineering capabilities, with experience building automated evaluation pipelines or large-scale ML systems.
- Experience with reinforcement learning (RLHF/RLAIF) and how it impacts model safety and alignment is a strong plus.
- Thrive in a fast-paced, high-agency startup environment with a bias toward action.
- Willing to make high-stakes decisions regarding model release and safety thresholds.
- Passionate about advancing the frontier of intelligence.
What We Offer
We believe that to build superintelligence that is truly open, you need to start at the foundation. Joining Reflection means building from the ground up as part of a small, talent-dense team. You will help define our future as a company and the frontier of open foundational models. We want you to do the most impactful work of your career with the confidence that you and the people you care about most are supported.
- Top-tier compensation: Salary and equity structured to recognize and retain the best talent globally.
- Health & wellness: Comprehensive medical, dental, vision, life, and disability insurance.
- Life & family: Fully paid parental leave for all new parents, including adoptive and surrogate journeys. Financial support for family planning.
- Benefits & balance: Paid time off when you need it, relocation support, and more perks that make the most of your time.
- Opportunities to connect with teammates: Lunch and dinner are provided daily, plus regular off-sites and team celebrations.
Member of Technical Staff - Safety Lead — Employer: Reflection AI
Contact Details:
Reflection AI Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Member of Technical Staff - Safety Lead role
✨Tip Number 1
Network like a pro! Reach out to folks in the AI safety space, especially those who have worked at places like DeepMind or OpenAI. A friendly chat can open doors and give you insights that job descriptions just can't.
✨Tip Number 2
Show off your skills! If you've got experience with red-teaming or adversarial testing, create a portfolio or a GitHub repo showcasing your projects. This gives potential employers a taste of what you can bring to the table.
✨Tip Number 3
Prepare for interviews by diving deep into the latest trends in AI safety. Brush up on jailbreaking techniques and how they relate to model vulnerabilities. Being well-versed will help you stand out as a candidate who’s truly passionate about the field.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets the attention it deserves. Plus, we love seeing candidates who are proactive about their job search!
Some tips for your application 🫡
Show Your Passion: When writing your application, let your enthusiasm for AI safety and superintelligence shine through. We want to see that you’re genuinely excited about the work we do and how you can contribute to our mission.
Tailor Your Experience: Make sure to highlight your relevant experience in AI safety, red-teaming, and software engineering. We love seeing how your background aligns with the role, so don’t hold back on those specific projects or achievements!
Be Clear and Concise: Keep your application straightforward and to the point. We appreciate clarity, so avoid jargon unless it’s necessary. Make it easy for us to understand your qualifications and why you’d be a great fit for our team.
Apply Through Our Website: We encourage you to submit your application directly through our website. It’s the best way for us to receive your details and ensures you’re considered for the role. Plus, it’s super easy!
How to prepare for a job interview at Reflection AI
✨Know Your Stuff
Make sure you have a solid grasp of LLM safety, adversarial attacks, and red-teaming methodologies. Brush up on the latest trends in AI safety and be ready to discuss how your experience aligns with the role's requirements.
✨Showcase Your Problem-Solving Skills
Prepare to share specific examples of how you've tackled challenges in previous roles, especially related to safety evaluations or automated pipelines. Use the STAR method (Situation, Task, Action, Result) to structure your responses.
✨Understand Their Mission
Familiarise yourself with Reflection’s mission to build open superintelligence. Be prepared to discuss how you can contribute to this goal, particularly in developing scalable safety benchmarks and ensuring model reliability.
✨Ask Insightful Questions
Prepare thoughtful questions that demonstrate your interest in the role and the company. Inquire about their current projects, team dynamics, or how they approach model safety and alignment. This shows you're engaged and serious about the position.