At a Glance
- Tasks: Join us to enhance AI reliability through innovative engineering and monitoring systems.
- Company: Anthropic is on a mission to create safe, reliable AI for everyone.
- Benefits: Enjoy flexible hours, generous leave, and competitive pay in a collaborative environment.
- Why this job: Be part of groundbreaking AI research that impacts society positively and ethically.
- Qualifications: A Bachelor's degree or equivalent experience in software or systems engineering is required.
- Other info: We value diverse perspectives and encourage all candidates to apply, even if you don't meet every listed qualification.
The predicted salary is between £43,200 and £72,000 per year.
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role
Anthropic is seeking talented and experienced Reliability Engineers, including Software Engineers and Systems Engineers with experience and interest in reliability, to join our team. We will be defining and achieving reliability metrics for all of Anthropic’s internal and external products and services. As we significantly improve reliability across Anthropic’s services, we also plan to use the developing capabilities of modern AI models to reengineer the way we work. This team will be a critical part of Anthropic’s mission to bring the benefits of groundbreaking AI technologies to humanity in a safe and reliable way.
Responsibilities:
- Develop appropriate Service Level Objectives for large language model serving and training systems, balancing availability/latency with development velocity
- Design and implement monitoring systems including availability, latency and other salient metrics
- Assist in the design and implementation of high-availability language model serving infrastructure capable of handling the needs of millions of external customers and high-traffic internal workloads
- Develop and manage automated failover and recovery systems for model serving deployments across multiple regions and cloud providers
- Lead incident response for critical AI services, ensuring rapid recovery and systematic improvements from each incident
- Build and maintain cost optimization systems for large-scale AI infrastructure, focusing on accelerator (GPU/TPU/Trainium) utilization and efficiency
You may be a good fit if you:
- Have extensive experience with distributed systems observability and monitoring at scale
- Understand the unique challenges of operating AI infrastructure, including model serving, batch inference, and training pipelines
- Have proven experience implementing and maintaining SLO/SLA frameworks for business-critical services
- Are comfortable working with both traditional metrics (latency, availability) and AI-specific metrics (model performance, training convergence)
- Have experience with chaos engineering and systematic resilience testing
- Can effectively bridge the gap between ML engineers and infrastructure teams
- Have excellent communication skills
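As a concrete illustration of the "traditional metrics" mentioned above, latency is usually summarized by percentiles rather than averages, since tail requests dominate user experience. A minimal nearest-rank sketch, with made-up sample data:

```python
# Illustrative sketch: a nearest-rank latency percentile, one of the
# "traditional" metrics mentioned above. Sample data is hypothetical.

def percentile(samples: list[float], pct: float) -> float:
    """Return the pct-th percentile (nearest-rank method) of a sample."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [12.0, 15.0, 11.0, 250.0, 14.0, 13.0, 16.0, 12.5, 13.5, 14.5]
print(f"p90 latency: {percentile(latencies_ms, 90)} ms")
```

Note how the single 250 ms outlier leaves the p90 untouched but would dominate the mean; this is why SLOs are typically written against percentiles.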
Strong candidates may also:
- Have experience operating large-scale model training infrastructure or serving infrastructure (>1000 GPUs)
- Have experience with one or more ML hardware accelerators (e.g., GPUs, TPUs, Trainium)
- Understand ML-specific networking optimizations like RDMA and InfiniBand
- Have expertise in AI-specific observability tools and frameworks
- Understand ML model deployment strategies and their reliability implications
- Have contributed to open-source infrastructure or ML tooling
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The expected salary range for this position is:
Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you’re interested in this work. We think AI systems like the ones we’re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
How we’re different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than working on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We’re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Staff Software Engineer, AI Reliability Engineering (London) employer: Anthropic
Contact Detail:
Anthropic Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Staff Software Engineer, AI Reliability Engineering (London) role
✨Tip Number 1
Familiarise yourself with Anthropic's mission and values. Understanding their focus on reliable, interpretable, and steerable AI systems will help you align your responses during interviews and discussions, showcasing your genuine interest in their work.
✨Tip Number 2
Engage with the AI community by participating in relevant forums or attending meetups. This can help you network with professionals in the field and may even lead to referrals or insights about the role that could give you an edge.
✨Tip Number 3
Brush up on your knowledge of distributed systems and AI infrastructure. Being able to discuss specific challenges and solutions related to model serving and training pipelines will demonstrate your expertise and readiness for the role.
✨Tip Number 4
Prepare to discuss your experience with SLO/SLA frameworks and chaos engineering. Be ready to share examples of how you've implemented these in past roles, as this will highlight your practical skills and problem-solving abilities relevant to the position.
Some tips for your application 🫡
Understand the Role: Before applying, make sure you fully understand the responsibilities and requirements of the Staff Software Engineer position at Anthropic. Tailor your application to highlight your relevant experience in reliability engineering and AI infrastructure.
Craft a Compelling 'Why Anthropic?' Response: This section is crucial. Clearly articulate why you want to work at Anthropic and how your values align with their mission of creating reliable and beneficial AI systems. Aim for 200-400 words that reflect your genuine interest.
Highlight Relevant Experience: In your CV and cover letter, emphasise your experience with distributed systems, SLO/SLA frameworks, and any familiarity with AI-specific metrics. Use specific examples to demonstrate your skills and achievements in these areas.
Proofread Your Application: Before submitting, thoroughly proofread your application materials. Check for spelling and grammatical errors, and ensure that your documents are well-organised and clearly formatted. A polished application reflects your attention to detail.
How to prepare for a job interview at Anthropic
✨Understand the Role and Responsibilities
Before your interview, make sure you thoroughly understand the responsibilities of a Staff Software Engineer in AI Reliability Engineering. Familiarise yourself with concepts like Service Level Objectives (SLOs), monitoring systems, and incident response. This will help you articulate how your experience aligns with their needs.
✨Showcase Your Technical Expertise
Be prepared to discuss your experience with distributed systems, AI infrastructure, and reliability metrics. Highlight any specific projects where you've implemented SLO/SLA frameworks or worked with large-scale model training infrastructure. Concrete examples will demonstrate your capability.
✨Communicate Effectively
Anthropic values strong communication skills, so practice explaining complex technical concepts in simple terms. Be ready to bridge the gap between ML engineers and infrastructure teams, showcasing your ability to collaborate across disciplines.
✨Prepare for Scenario-Based Questions
Expect scenario-based questions that assess your problem-solving skills in real-world situations. Think about past incidents you've managed, how you approached them, and what improvements you implemented afterwards. This will show your proactive mindset towards reliability engineering.