At a Glance
- Tasks: Join our AGI safety monitoring team to develop tools that enhance AI safety.
- Company: Apollo Research, a leader in AI safety and risk management.
- Benefits: Competitive salary, unlimited vacation, flexible hours, and professional development budget.
- Why this job: Make a real-world impact on AI safety while working with cutting-edge technology.
- Qualifications: 2+ years in empirical research with AI systems and strong Python skills.
- Other info: Collaborative environment with opportunities for rapid growth and innovation.
The predicted salary is between 80000 - 126000 ÂŁ per year.
Join our new AGI safety monitoring team and help transform complex AI research into practical tools that reduce risks from AI. As an applied researcher, you’ll work closely with our CEO, monitoring engineers and Evals team software engineers to build tools that make AI agent safety accessible at scale. We are building tools that monitor AI coding agents for safety and security failures. You will join a small team and will have significant ability to shape the team & tech, and have the ability to earn responsibility quickly. You will like this opportunity if you’re passionate about using empirical research to make AI systems safer in practice. You enjoy the challenge of translating theoretical AI risks into concrete detection mechanisms. You thrive on rapid iteration and learning from data. You want your research to directly impact real-world AI safety.
Key Responsibilities
- Systematically collect and catalog coding agent failure modes from real-world instances, public examples, research literature, and theoretical predictions.
- Design and conduct experiments to test monitor effectiveness across different failure modes and agent behaviors.
- Build and maintain evaluation frameworks to measure progress on monitoring capabilities.
- Iterate on monitoring approaches based on empirical results, balancing detection accuracy with computational efficiency.
- Stay current with research on AI safety, agent failures, and detection methodologies.
- Stay current with research into coding security and safety vulnerabilities.
Monitor Design & Optimization
- Develop a comprehensive library of monitoring prompts tailored to specific failure modes (e.g., security vulnerabilities, goal misalignment, deceptive behaviors).
- Experiment with different reasoning strategies and output formats to improve monitor reliability.
- Design and test hierarchical monitoring architectures and ensemble approaches.
- Optimize log pre-processing pipelines to extract relevant signals while minimizing latency and computational costs.
- Implement and evaluate different scaffolding approaches for monitors, including chain-of-thought reasoning, structured outputs, and multi-step verification.
Future Projects (likely not in the first 6 months)
- Fine-tune smaller open-source models to create efficient, specialized monitors for high-volume production environments.
- Design and build agentic monitoring systems that autonomously investigate logs to identify both known and novel failure modes.
Job Requirements
- 2+ years of experience conducting empirical research with large language models or AI systems.
- Strong experience with AI coding agents, having extensively used and compared frontier coding agents.
- Experience with LLM-as-a-judge setups.
- Experience designing and running experiments, analyzing results, and iterating based on empirical findings (e.g., prompting, scaffolding, agent design, fine-tuning, or RL).
- Strong Python programming skills.
- Demonstrated ability to work independently on open-ended research problems.
Bonus
- Experience with AI evaluation frameworks, in particular Inspect (though other frameworks are relevant as well).
- Familiarity with AI safety concepts, particularly agent-related risks.
- Familiarity with computer security (e.g., security testing and secure system design).
- Experience fine-tuning language models or working with smaller open-source models.
- Previous work building developer tools or monitoring systems.
- Publications or contributions to AI safety or ML research.
- Experience with production log systems or production log analysis.
What You’ll Accomplish in Your First Year
- Build a comprehensive failure mode database: Systematically collect and categorize 100+ distinct AI agent failure modes across safety and security dimensions, creating the foundation for our monitoring library.
- Develop and validate monitoring approaches: Create and empirically test monitoring prompts and strategies for key failure categories, establishing clear metrics for monitor performance and building evaluation frameworks to track progress.
- Optimize the monitoring pipeline: Improve log preprocessing and monitor scaffolding to achieve measurable improvements in detection accuracy, false positive rates, and computational efficiency.
- Advance monitoring capabilities: Begin work on advanced approaches such as fine-tuned specialized monitors or agentic investigation systems, moving our monitoring from reactive detection toward proactive risk identification.
Representative Projects
- Hierarchical monitoring for coding agent security: Design a multi-layer monitoring system for detecting security vulnerabilities introduced by coding agents. Start by cataloging common security failure modes (e.g., hardcoded credentials, SQL injection vulnerabilities, insecure API calls). Build specialized monitors for each category, then create a hierarchical system where fast, efficient first-pass monitors flag potentially problematic code for deeper investigation by more sophisticated monitors. Validate the system on synthetic test cases and real agent outputs, iterating to optimize the tradeoff between detection rates and false positives while maintaining sub-second latency for most monitoring decisions.
Benefits
- Salary: 100k – 180k GBP (approx. 135k – 245k USD).
- Flexible work hours and schedule.
- Unlimited vacation.
- Unlimited sick leave.
- Lunch, dinner, and snacks are provided for all employees on workdays.
- Paid work trips, including staff retreats, business trips, and relevant conferences.
- A yearly $1,000 (USD) professional development budget.
Logistics
- Start Date: Target of 2-3 months after the first interview.
- Time Allocation: Full-time.
- Location: The office is in London, and the building is next to the London Initiative for Safe AI (LISA) offices. This is an in-person role. In rare situations, we may consider partially remote arrangements on a case-by-case basis.
- Work Visas: We can sponsor UK visas.
About the Team
The monitoring team is a new team. Especially early on, you will work closely with Marius Hobbhahn (CEO), Jeremy Neiman (engineer) and others on the monitoring team. You’ll also sometimes work with our SWEs, Rusheb Shah, Andrei Matveiakin, Alex Kedrik, and Glen Rodgers to translate our internal tools into externally usable tools. Furthermore you will interact with our researchers, since we intend to “our own customer” by using our tools internally for our research work.
About Apollo Research
The rapid rise in AI capabilities offers tremendous opportunities, but also presents significant risks. At Apollo Research, we’re primarily concerned with risks from Loss of Control, i.e. risks coming from the model itself rather than e.g. humans misusing the AI. We’re particularly concerned with deceptive alignment/scheming, a phenomenon where a model appears to be aligned but is, in fact, misaligned and capable of evading human oversight. We work on the detection of scheming (e.g., building evaluations), the science of scheming (e.g., model organisms), and scheming mitigations (e.g., anti-scheming and control). We closely work with multiple frontier AI companies, e.g. to test their models before deployment or collaborate on scheming mitigations.
Equality Statement
Apollo Research is an Equal Opportunity Employer. We value diversity and are committed to providing equal opportunities to all, regardless of age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, or sexual orientation.
How to Apply
Please complete the application form with your CV. The provision of a cover letter is neither required nor encouraged. Please also feel free to share links to relevant work samples.
Your Privacy and Fairness in Our Recruitment Process
We are committed to protecting your data, ensuring fairness, and adhering to workplace fairness principles in our recruitment process. To enhance hiring efficiency, we use AI-powered tools to assist with tasks such as resume screening. These tools are designed and deployed in compliance with internationally recognized AI governance frameworks. Your personal data is handled securely and transparently. We adopt a human-centred approach: all resumes are screened by a human and final hiring decisions are made by our team. If you have questions about how your data is processed or wish to report concerns about fairness, please contact us at hr@apollo-research.com.
Referrals increase your chances of interviewing at Apollo Research by 2x.
Location: London, England, United Kingdom.
Applied Researcher (Monitoring) employer: Apollo Research
Contact Detail:
Apollo Research Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Applied Researcher (Monitoring)
✨Tip Number 1
Network like a pro! Reach out to people in the AI safety field, especially those connected to Apollo Research. A friendly chat can open doors and give you insights that might just set you apart from other candidates.
✨Tip Number 2
Show off your skills! If you've got relevant projects or research, don’t hesitate to share them. Create a portfolio or a GitHub repo showcasing your work with AI coding agents and monitoring systems. It’s a great way to demonstrate your expertise.
✨Tip Number 3
Prepare for the interview by diving deep into AI safety topics. Brush up on current trends and challenges in the field. Being able to discuss these intelligently will show you're not just passionate but also knowledgeable about the role.
✨Tip Number 4
Apply through our website! It’s the best way to ensure your application gets seen. Plus, it shows you’re genuinely interested in joining our team at Apollo Research. Don’t miss out on this opportunity!
We think you need these skills to ace Applied Researcher (Monitoring)
Some tips for your application 🫡
Get Your CV Spot On: Make sure your CV is tailored to the Applied Researcher role. Highlight your experience with AI systems and empirical research, and don’t forget to showcase your Python skills. We want to see how you can contribute to our mission!
Showcase Relevant Work Samples: If you've got any projects or papers that relate to AI safety or coding agents, share them! Links to your work can really help us understand your expertise and passion for the field.
Keep It Simple: We’re not asking for a cover letter, so keep your application straightforward. Just fill out the form on our website and let your CV do the talking. We appreciate clarity and conciseness!
Apply Early: Since we review applications on a rolling basis, it’s best to get your application in sooner rather than later. Don’t wait until the deadline; show us your enthusiasm for joining our team!
How to prepare for a job interview at Apollo Research
✨Know Your AI Safety Concepts
Make sure you brush up on AI safety concepts, especially those related to agent risks. Familiarity with the latest research and methodologies will show your passion for the field and your commitment to making AI systems safer.
✨Showcase Your Empirical Research Skills
Prepare to discuss your previous experience with empirical research, particularly with large language models or AI systems. Be ready to share specific examples of experiments you've designed, how you analysed results, and what iterations you made based on your findings.
✨Demonstrate Your Python Proficiency
Since strong Python programming skills are a must, be prepared to talk about your coding experience. You might even want to bring along a project or two that showcases your ability to build tools or systems relevant to AI monitoring.
✨Engage with the Team's Vision
Understand Apollo Research's mission and the specific goals of the AGI safety monitoring team. Show enthusiasm for their projects and be ready to discuss how your skills can contribute to their vision of transforming AI safety into practical tools.