Anthropic Fellows Program — AI Safety in London

London | Anthropic | Full-Time | Estimated salary: £92,400–£124,800 per year

At a Glance

  • Tasks: Conduct AI safety research and collaborate with top mentors on impactful projects.
  • Company: Join Anthropic, a leader in creating safe and beneficial AI systems.
  • Benefits: Receive a competitive stipend, funding for research, and access to a vibrant community.
  • Other info: Work in dynamic locations like London or Berkeley, with potential for remote options.
  • Why this job: Make a difference in AI safety while gaining invaluable experience and mentorship.
  • Qualifications: Fluency in Python and a strong technical background in relevant fields.

The estimated salary is between £92,400 and £124,800 per year.

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

The next cohort of Anthropic fellows starts on July 20, 2026. The Anthropic Fellows Program is designed to foster AI research and engineering talent. We provide funding and mentorship to promising technical talent - regardless of previous experience. Fellows will primarily use external infrastructure (e.g. open-source models, public APIs) to work on an empirical project aligned with our research priorities, with the goal of producing a public output (e.g. a paper submission). In one of our earlier cohorts, over 80% of fellows produced papers.

What to expect

  • 4 months of full-time research
  • Direct mentorship from Anthropic researchers
  • Access to a shared workspace (in either Berkeley, California or London, UK)
  • Connection to the broader AI safety and security research community
  • Weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD + benefits (these vary by country)
  • Funding for compute (~$15k/month) and other research expenses

Compensation

The expected base stipend for this role is 3,850 USD / 2,310 GBP / 4,300 CAD per week, with an expectation of 40 hours per week for 4 months (with possible extension).

Fellows workstreams

Due to the success of the Anthropic Fellows for AI Safety Research program, we are now expanding it across teams at Anthropic. We expect significant overlap in the skills and responsibilities across the roles, and by default we will consider candidates for all workstreams. Some workstreams may include unique assessment steps, so we ask for your workstream preferences in the application.

You can see an overview of the current workstreams below:

Across the workstreams, you may be a good fit if you:

  • Are motivated by making sure AI is safe and beneficial for society as a whole
  • Are excited to transition into empirical AI research and would be interested in a full-time role at Anthropic
  • Have a strong technical background in computer science, mathematics, or physics
  • Thrive in fast-paced, collaborative environments
  • Can implement ideas quickly and communicate clearly

Strong candidates may also have:

  • Strong background in a discipline relevant to a specific Fellows workstream (e.g. economics, social sciences, or cybersecurity)
  • Experience in areas of research or engineering related to their workstream

Candidates must be:

  • Fluent in Python programming
  • Available to work full-time on the Fellows program

Mentors, research areas, & past projects

Fellows will undergo a project selection & mentor matching process. Potential mentors include:

  • Sam Bowman
  • Alex Tamkin
  • Trenton Bricken
  • Collin Burns
  • Samuel Marks
  • Kyle Fish
  • Ethan Perez

Our mentors will lead projects in select AI safety research areas, such as:

  • Scalable Oversight: Developing techniques to keep highly capable models helpful and honest, even as they surpass human-level intelligence in various domains.
  • Adversarial Robustness and AI Control: Creating methods to ensure advanced AI systems remain safe and harmless in unfamiliar or adversarial scenarios.
  • Model Organisms: Creating model organisms of misalignment to improve our empirical understanding of how alignment failures might arise.
  • Model Internals / Mechanistic Interpretability: Advancing our understanding of the internal workings of large language models to enable more targeted interventions and safety measures.
  • AI Welfare: Improving our understanding of potential AI welfare and developing related evaluations and mitigations.
  • Open-Source Circuits: A past project by Michael Hanna and Mateusz Piotrowski, with mentorship from Emmanuel Ameisen and Jack Lindsey.

You might be a particularly great fit if you:

  • Are motivated by reducing catastrophic risks from advanced AI systems
  • Have experience with empirical ML research projects
  • Have experience working with large language models
  • Have experience in one of the research areas mentioned above
  • Have a track record of open-source contributions

Logistics

To participate in the Fellows program, you must have work authorization in the US, UK, or Canada and be located in that country during the program.

Workspace Locations: We have designated shared workspaces in London and Berkeley where fellows will work from and mentors will visit. We are also open to remote fellows in the UK, US, or Canada. We will ask you about your availability to work from Berkeley or London (full- or part-time) during the program.

Visa Sponsorship: We are not currently able to sponsor visas for fellows. To participate in the Fellows program, you need to have or independently obtain full-time work authorization in the UK, the US, or Canada.

Program Duration: The program runs for 4 months, full-time. If you can't commit to the full duration, please still apply and note your constraints in the application. We review these requests on a case‑by‑case basis.

Please note: We do not guarantee that we will make any full-time offers to fellows. However, strong performance during the program may indicate that a Fellow would be a good fit for full-time roles at Anthropic. In previous cohorts, 25–50% of fellows received a full-time offer, and we’ve supported many more to go on to do great work on AI safety and security at other organizations.

Employer: Anthropic

Anthropic is an exceptional employer dedicated to fostering a collaborative and innovative work culture, particularly within the AI safety domain. With access to direct mentorship from leading researchers and a vibrant workspace in London, fellows are empowered to engage in meaningful research while receiving competitive stipends and funding for their projects. The company prioritises employee growth, offering opportunities to transition into full-time roles and connect with a broader community committed to ensuring AI systems are safe and beneficial for society.

Contact Details:

Anthropic Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Anthropic Fellows Program — AI Safety role in London

Tip Number 1

Network like a pro! Reach out to current or past fellows and mentors from the Anthropic program. They can give you insider tips and maybe even put in a good word for you.

Tip Number 2

Show your passion for AI safety! During interviews, share your thoughts on current challenges in AI and how you’d tackle them. This will demonstrate your commitment and understanding of the field.

Tip Number 3

Be ready to discuss your projects! Whether it’s a paper, code, or an open-source contribution, have a few examples up your sleeve that showcase your skills and thought process.

Tip Number 4

Apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, you’ll find all the details you need about the program there.

We think you need these skills to ace the Anthropic Fellows Program — AI Safety in London

Python Programming
Empirical AI Research
Collaboration Skills
Technical Background in Computer Science
Mathematics
Physics
Experience with Large Language Models
Open-source Contributions
Communication Skills
Motivation for AI Safety
Adaptability in Fast-paced Environments
Research Experience in Relevant Disciplines

Some tips for your application 🫡

Be Yourself: When writing your application, let your personality shine through! We want to get to know the real you, so don’t be afraid to share your passion for AI safety and what motivates you.

Tailor Your Application: Make sure to customise your application for the Anthropic Fellows Program. Highlight your relevant skills and experiences that align with our mission of creating safe and beneficial AI systems.

Show Your Work: If you've worked on any projects related to AI or have contributions to open-source, make sure to include them! We love seeing practical examples of your work and how you approach challenges.

Apply Through Our Website: Don’t forget to submit your application through our website! It’s the best way for us to keep track of your application and ensure it gets the attention it deserves.

How to prepare for a job interview at Anthropic

Know Your AI Safety Stuff

Make sure you brush up on the latest trends and research in AI safety. Familiarise yourself with key concepts like scalable oversight and adversarial robustness. This will not only show your passion but also help you engage in meaningful discussions during the interview.

Showcase Your Technical Skills

Since a strong technical background is crucial, be ready to discuss your experience with Python programming and any relevant projects you've worked on. Bring examples of your work, especially if they relate to empirical AI research or large language models, to demonstrate your capabilities.

Be Ready for Collaboration

Anthropic values teamwork, so prepare to talk about your experiences in collaborative environments. Share specific instances where you thrived in a team setting, highlighting how you communicated ideas clearly and implemented solutions quickly.

Ask Thoughtful Questions

Interviews are a two-way street! Prepare some insightful questions about the Fellows program, mentorship opportunities, or ongoing projects at Anthropic. This shows your genuine interest and helps you assess if the role aligns with your career goals.
