Anthropic Fellows Program — AI Safety

Full-Time £92,400 / year (est.)
Anthropic

At a Glance

  • Tasks: Conduct AI safety research and collaborate with top mentors on impactful projects.
  • Company: Join Anthropic, a leader in creating safe and beneficial AI systems.
  • Benefits: Receive a competitive stipend, funding for research, and access to a vibrant community.
  • Other info: Work in dynamic locations like London or Berkeley, with potential for full-time roles.
  • Why this job: Make a real difference in AI safety while developing your skills in a supportive environment.
  • Qualifications: Fluency in Python and a strong technical background in relevant fields.

The estimated salary is £92,400 per year.

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

The next cohort of Anthropic fellows starts on July 20, 2026. The Anthropic Fellows Program is designed to foster AI research and engineering talent. We provide funding and mentorship to promising technical talent, regardless of previous experience. Fellows will primarily use external infrastructure (e.g. open-source models, public APIs) to work on an empirical project aligned with our research priorities, with the goal of producing a public output (e.g. a paper submission). In one of our earlier cohorts, over 80% of fellows produced papers.

What to expect

  • 4 months of full-time research
  • Direct mentorship from Anthropic researchers
  • Access to a shared workspace (in either Berkeley, California or London, UK)
  • Connection to the broader AI safety and security research community
  • Weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD + benefits (these vary by country)
  • Funding for compute (~$15k/month) and other research expenses

Compensation

The expected base stipend for this role is 3,850 USD / 2,310 GBP / 4,300 CAD per week, with an expectation of 40 hours per week for 4 months (with possible extension).

Fellows workstreams

Due to the success of the Anthropic Fellows for AI Safety Research program, we are now expanding it across teams at Anthropic. We expect there to be significant overlap in the types of skills and responsibilities across the roles and will by default consider candidates for all the workstreams. Some of the workstreams may include unique assessment steps; we therefore ask you for workstream preferences in the application.

You can see an overview of the current workstreams below:

Across the workstreams, you may be a good fit if you:

  • Are motivated by making sure AI is safe and beneficial for society as a whole
  • Are excited to transition into empirical AI research and would be interested in a full-time role at Anthropic
  • Have a strong technical background in computer science, mathematics, or physics
  • Thrive in fast-paced, collaborative environments
  • Can implement ideas quickly and communicate clearly

Strong candidates may also have:

  • Strong background in a discipline relevant to a specific Fellows workstream (e.g. economics, social sciences, or cybersecurity)
  • Experience in areas of research or engineering related to their workstream

Candidates must be:

  • Fluent in Python programming
  • Available to work full-time on the Fellows program

Mentors, research areas, & past projects

Fellows will undergo a project selection & mentor matching process. Potential mentors include:

  • Sam Bowman
  • Alex Tamkin
  • Trenton Bricken
  • Collin Burns
  • Samuel Marks
  • Kyle Fish
  • Ethan Perez

Our mentors will lead projects in select AI safety research areas, such as:

  • Scalable Oversight: Developing techniques to keep highly capable models helpful and honest, even as they surpass human-level intelligence in various domains.
  • Adversarial Robustness and AI Control: Creating methods to ensure advanced AI systems remain safe and harmless in unfamiliar or adversarial scenarios.
  • Model Organisms: Creating model organisms of misalignment to improve our empirical understanding of how alignment failures might arise.
  • Model Internals / Mechanistic Interpretability: Advancing our understanding of the internal workings of large language models to enable more targeted interventions and safety measures.
  • AI Welfare: Improving our understanding of potential AI welfare and developing related evaluations and mitigations.
  • Open-Source Circuits: A past fellows project by Michael Hanna and Mateusz Piotrowski, with mentorship from Emmanuel Ameisen and Jack Lindsey.

You might be a particularly great fit for these workstreams if you:

  • Are motivated by reducing catastrophic risks from advanced AI systems
  • Have experience with empirical ML research projects
  • Have experience working with large language models
  • Have experience in one of the research areas mentioned above
  • Have a track record of open-source contributions

Logistics

To participate in the Fellows program, you must have work authorization in the US, UK, or Canada and be located in that country during the program.

Workspace Locations: We have designated shared workspaces in London and Berkeley where fellows will work from and mentors will visit. We are also open to remote fellows in the UK, US, or Canada. We will ask you about your availability to work from Berkeley or London (full- or part-time) during the program.

Visa Sponsorship: We are not currently able to sponsor visas for fellows. To participate in the Fellows program, you need to have or independently obtain full-time work authorization in the UK, the US, or Canada.

Program Duration: The program runs for 4 months, full-time. If you can't commit to the full duration, please still apply and note your constraints in the application. We review these requests on a case‑by‑case basis.

Please note: We do not guarantee that we will make any full-time offers to fellows. However, strong performance during the program may indicate that a Fellow would be a good fit for full-time roles at Anthropic. In previous cohorts, 25–50% of fellows received a full-time offer, and we’ve supported many more to go on to do great work on AI safety and security at other organizations.

Anthropic Fellows Program — AI Safety employer: Anthropic

Anthropic is an exceptional employer dedicated to fostering AI research and engineering talent through its Anthropic Fellows Program. With a strong emphasis on mentorship, collaboration, and access to cutting-edge resources, fellows benefit from a supportive work culture that prioritises safety and societal impact in AI. Located in vibrant cities like London and Berkeley, the programme offers unique opportunities for professional growth and connection within the broader AI safety community, making it an ideal environment for aspiring researchers.

Contact Detail:

Anthropic Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Anthropic Fellows Program — AI Safety

Tip Number 1

Network like a pro! Connect with current and past fellows, mentors, and anyone in the AI safety community. Use platforms like LinkedIn to reach out and ask for insights about their experiences. You never know who might have a lead on opportunities!

Tip Number 2

Show your passion for AI safety! When you get the chance to chat with someone from Anthropic, make sure to express why you care about making AI safe and beneficial. Share your ideas and enthusiasm; it can really set you apart from other candidates.

Tip Number 3

Prepare for interviews by diving deep into AI safety topics. Familiarise yourself with current research and challenges in the field. Being able to discuss these intelligently will show that you're not just interested but also knowledgeable and ready to contribute.

Tip Number 4

Don’t forget to apply through our website! It’s the best way to ensure your application gets seen. Plus, keep an eye on any upcoming events or webinars hosted by Anthropic; they’re great for learning more and making connections.

We think you need these skills to ace the Anthropic Fellows Program — AI Safety

Python Programming
Empirical AI Research
Collaboration Skills
Technical Background in Computer Science
Mathematics
Physics
Experience with Large Language Models
Open-source Contributions
Motivation for AI Safety
Communication Skills
Adaptability in Fast-paced Environments
Research Experience in Relevant Areas

Some tips for your application 🫡

Be Yourself: When writing your application, let your personality shine through! We want to get to know the real you, so don’t be afraid to share your passion for AI safety and what motivates you.

Tailor Your Application: Make sure to customise your application for the Anthropic Fellows Program. Highlight your relevant skills and experiences that align with our mission of creating safe and beneficial AI systems.

Show Your Work: If you've worked on any projects related to AI or have contributions to open-source, make sure to include them! We love seeing practical examples of your work and how you approach challenges.

Apply Through Our Website: Don’t forget to submit your application through our website! It’s the best way for us to keep track of your application and ensure it gets the attention it deserves.

How to prepare for a job interview at Anthropic

Know Your AI Safety Stuff

Make sure you brush up on the latest trends and research in AI safety. Familiarise yourself with key concepts like scalable oversight and adversarial robustness. This will not only show your passion for the field but also help you engage in meaningful discussions during the interview.

Show Off Your Technical Skills

Since a strong technical background is crucial, be ready to discuss your experience with Python programming and any relevant projects you've worked on. Prepare to explain your thought process and problem-solving approach, especially if you've tackled empirical ML research or worked with large language models.

Be Ready to Collaborate

Anthropic values teamwork, so highlight your experiences in collaborative environments. Share examples of how you've worked effectively with others to achieve common goals, and express your excitement about contributing to a fast-paced team focused on making AI safe and beneficial.

Ask Thoughtful Questions

Prepare some insightful questions about the Fellows program and the specific workstreams you're interested in. This shows your genuine interest in the role and helps you understand how you can contribute to Anthropic's mission. Plus, it gives you a chance to connect with your interviewers on a deeper level.
