I'm working with a cutting-edge AI safety start-up that's hiring a Research Engineer (AI Alignment & Safety).
What You'll Do
- Evaluate advanced AI systems and detect potential risks (e.g., deceptive behaviors)
- Work on interpretability research to uncover how models really work
- Build tools that turn research into scalable, production-ready evaluations
What We're Looking For
- Strong background in Python and ML/neural networks
- Experience in AI safety, alignment, or interpretability research
- Ability to write clean, production-quality code
- Curiosity, analytical mindset, and strong communication
A great opportunity to work with leading researchers and make a real impact on safe AI development. Competitive pay and benefits included.
Location: London, England
Contact: Seer Recruiting Team