At a Glance
- Tasks: Design and build internal tools for AI safety and human review processes.
- Company: Join Anthropic, a leading AI safety company with a mission to create beneficial AI systems.
- Benefits: Enjoy a competitive salary, hybrid working options, and opportunities for professional growth.
- Other info: Dynamic team culture with a focus on diverse perspectives and career development.
- Why this job: Make a real impact on AI safety while working with cutting-edge technology.
- Qualifications: 4+ years in software engineering, experience with internal tools, and a passion for AI safety.
The predicted salary is between £60,000 and £80,000 per year.
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role
The Safeguards team is responsible for the systems that detect, review, and act on misuse of Anthropic's models — work that sits at the very centre of our mission to develop AI safely. Within Safeguards, the Foundations team builds the platforms, infrastructure, and internal tools that the rest of the organisation depends on to do this well.
We are looking for a software engineer to own and extend the internal tooling that powers human review — the case management, labelling, investigation, and enforcement interfaces our analysts and policy specialists use every day. These are back‑office tools, but they are anything but low‑stakes: the speed, clarity, and reliability of this tooling directly determines how quickly Anthropic can identify harmful behaviour, make sound enforcement decisions, and feed signal back into model training.
You'll work closely with Trust & Safety operations, policy, and detection‑engineering teams to turn messy operational workflows into well‑designed, durable software. This is a hands‑on, full‑stack role for someone who enjoys building products for internal users, sweats the details of usability and correctness, and wants their engineering work to have a clear line to real‑world safety outcomes.
Responsibilities
- Design, build, and maintain the internal review and enforcement tooling used by Safeguards analysts — including case queues, content review surfaces, decision/audit logging, and account‑actioning workflows (a simplified, hypothetical sketch of this kind of system follows this list).
- Understand user workflows and establish tooling for processes that may currently be distributed across a number of tools and UIs.
- Develop the ‘base layer’ of reusable APIs, data storage, and backend services that let new review workflows be stood up quickly and safely.
- Partner with operations and policy teams to understand reviewer pain points, then translate them into clear product improvements that reduce handling time and decision error.
- Integrate tooling with upstream detection systems and downstream enforcement infrastructure so that flagged behaviour flows cleanly from signal → human review → action.
- Build in the guardrails that sensitive internal tools require: granular permissions, audit trails, data‑access controls, and reviewer wellbeing features (e.g. content blurring, exposure limits).
- Instrument the tools you ship — surfacing metrics on queue health, reviewer throughput, and decision quality so the team can see what’s working.
- Contribute to the Foundations team’s shared platform and on‑call responsibilities.
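As a purely illustrative aside, the sketch below shows, in TypeScript (part of the stack mentioned later in this posting), the general shape of a case record with an append-only audit trail and severity/skill-based routing of the kind the responsibilities above describe. Every name and type in it (ReviewCase, AuditEntry, routeCase, and so on) is hypothetical and does not describe Anthropic's actual systems.
```typescript
// Hypothetical sketch only; illustrative of the posting, not of any real system.

type Severity = "low" | "medium" | "high" | "critical";
type CaseStatus = "queued" | "in_review" | "escalated" | "actioned" | "closed";

interface AuditEntry {
  at: Date;         // when the change happened
  actorId: string;  // reviewer or system that made it
  action: string;   // e.g. "assigned", "escalated", "account_actioned"
  note?: string;    // free-text rationale kept for later audit
}

interface ReviewCase {
  id: string;
  severity: Severity;
  status: CaseStatus;
  requiredSkills: string[]; // hypothetical skill tags a reviewer must hold
  audit: AuditEntry[];      // append-only decision history
}

interface Reviewer {
  id: string;
  skills: string[];
}

// Route a case to the first reviewer whose skills cover its requirements;
// undefined means no one qualifies and the case should be escalated.
function routeCase(c: ReviewCase, reviewers: Reviewer[]): Reviewer | undefined {
  return reviewers.find((r) =>
    c.requiredSkills.every((skill) => r.skills.includes(skill))
  );
}

// Record every state change in the audit trail rather than mutating silently.
function assign(c: ReviewCase, reviewer: Reviewer): ReviewCase {
  return {
    ...c,
    status: "in_review",
    audit: [...c.audit, { at: new Date(), actorId: reviewer.id, action: "assigned" }],
  };
}
```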
You may be a good fit if you
- Have 4+ years of experience as a software engineer, with meaningful time spent building internal tools, operations platforms, or back‑office products.
- Are comfortable using agentic coding tools (e.g. Claude Code) as a core part of your workflow, and can direct them to ship well‑tested, production‑quality software at a high cadence without lowering the bar (our stack is mostly React/TypeScript and Python).
- Take a product‑minded approach to internal users: you work with the people using your tools, watch where they struggle, and fix it.
- Are results‑oriented, with a bias towards flexibility and impact.
- Pick up slack, even when it falls outside your job description.
- Communicate clearly with non‑engineering stakeholders and can explain technical trade‑offs to operations and policy partners.
- Care about the societal impacts of your work and want to apply your engineering skills directly to AI safety.
Strong candidates may also
- Have built tooling in a trust & safety, content moderation, fraud, integrity, or risk‑operations setting.
- Have experience designing case‑management or workflow systems (queues, SLAs, escalation paths, audit logs).
- Have worked with sensitive data and understand the privacy, access‑control, and reviewer‑wellbeing considerations that come with it.
- Have experience with GCP/AWS, Postgres/BigQuery, and CI/CD in a production environment.
- Have used LLMs as a building block inside operational tools (e.g. assisted triage, summarisation, or classification in the review loop).
Representative projects
- Rebuilding the analyst review queue so cases are routed by severity and skill, with full decision history and one‑click escalation.
- Shipping a unified account‑investigation view that pulls signals from multiple detection systems into a single, permissioned surface.
- Adding content‑obfuscation and exposure‑tracking features to protect reviewers working with harmful material.
- Building an internal labelling tool that feeds high‑quality ground truth back to the detection and research teams.
Candidates need not have
- 100% of the skills listed above.
- Prior experience in AI or machine learning.
- Formal certifications or education credentials.
Logistics
- Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience.
- Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience.
- Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position.
- Location‑based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
- Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you’re interested in this work. We think AI systems like the ones we’re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you’re ever unsure about a communication, don’t click any links—visit anthropic.com/careers directly for confirmed position openings.
Employer: Anthropic
Contact detail: Anthropic Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Software Engineer, Safeguards Foundations (Internal Tooling) role in London
✨Tip Number 1
Network like a pro! Reach out to current employees at Anthropic on LinkedIn or other platforms. Ask them about their experiences and any tips they might have for the application process. Personal connections can make a huge difference!
✨Tip Number 2
Prepare for interviews by diving deep into the company’s mission and values. Understand how your skills as a software engineer can contribute to AI safety. Tailor your responses to show that you’re not just a fit for the role, but also for the team culture.
✨Tip Number 3
Practice coding challenges and system design questions relevant to internal tooling. Use platforms like LeetCode or HackerRank to sharpen your skills. Being well-prepared will boost your confidence and help you shine during technical interviews.
✨Tip Number 4
Don’t forget to follow up after your interviews! A simple thank-you email expressing your appreciation for the opportunity can leave a lasting impression. It shows your enthusiasm and professionalism, which are key traits for a software engineer.
Some tips for your application 🫡
Tailor Your Application: Make sure to customise your CV and cover letter for the Software Engineer role. Highlight your experience with internal tools and back-office products, as this is key for us at Anthropic.
Showcase Your Technical Skills: Don’t forget to mention your proficiency in React, TypeScript, and Python. We want to see how you’ve used these technologies in past projects, especially in building operational platforms.
Emphasise User-Centric Design: We care about the end-users of our tools, so share examples of how you've improved user workflows or addressed pain points in your previous roles. This will show us your product-minded approach.
Apply Through Our Website: To make sure your application gets the attention it deserves, apply directly through our website. It’s the best way for us to keep track of your application and ensure it reaches the right people.
How to prepare for a job interview at Anthropic
✨Know Your Tech Stack
Make sure you’re familiar with the technologies mentioned in the job description, like React, TypeScript, and Python. Brush up on your coding skills and be ready to discuss how you've used these tools in past projects.
✨Understand User Workflows
Since this role involves building internal tools, it’s crucial to understand the workflows of the users. Think about how you can improve their experience and be prepared to share specific examples of how you've done this before.
✨Communicate Clearly
You’ll need to explain technical concepts to non-engineering stakeholders. Practice articulating your thoughts clearly and concisely, focusing on how your work impacts the team and the overall mission of AI safety.
✨Show Your Passion for AI Safety
Anthropic is all about creating safe AI systems. Be ready to discuss why AI safety matters to you and how your engineering skills can contribute to this mission. Share any relevant experiences that highlight your commitment to ethical technology.