Research Engineer/Research Scientist - Red Team (Misuse)

Full-Time · £65,000 – £145,000 per year (est.) · Hybrid working
AI Security Institute

At a Glance

  • Tasks: Join our Red Team to research and develop advanced AI security measures.
  • Company: Be part of the world's leading AI Security Institute, shaping global AI governance.
  • Benefits: Enjoy competitive salary, generous leave, and professional development opportunities.
  • Why this job: Make a real impact on AI safety while collaborating with top experts in the field.
  • Qualifications: Experience with large language models and a strong publication record are essential.
  • Other info: Flexible working options and a vibrant office environment in central London.

The predicted salary is between £65,000 and £145,000 per year.

About the AI Security Institute

The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We’re in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally. We’re here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.

Team Description

Interventions that secure a system from abuse by bad actors or misaligned AI systems will grow in importance as AI systems become more capable, autonomous, and integrated into society. The Misuse Red Team is a specialised sub-team within AISI's wider Red Team. We red-team frontier AI safeguards for dangerous capabilities, research novel attack vectors, and develop advanced automated attack tooling. We share our findings with frontier AI companies (including Anthropic, OpenAI, DeepMind), key UK officials, and other governments to inform their respective deployment, research, and policy decision-making.

We have published on several topics, including novel automated attack algorithms (Boundary Point Jailbreaking), poisoning attacks, safeguards safety cases, defending fine-tuning APIs, third-party attacks on agents, agent misuse, and pre-training data filtering. Examples of our impact include advancing the benchmarking of agent misuse, identifying novel vulnerabilities and collaborating with frontier labs to mitigate them, and producing insights into the feasibility and effectiveness of attacks and defences in data poisoning and fine-tuning APIs.

We’re looking for research scientists and research engineers for our misuse sub-team with expertise in developing and analysing attacks and protections for systems based on large language models, or with broader experience in frontier LLM research and development. An ideal candidate would have a strong track record of performing and publishing novel and impactful research in these or other areas of LLM research.

We’re looking for:

  • Research Scientists, who typically lead technical direction – picking the questions, designing the experiments, and owning the conclusions (typically evidenced by a strong publication record).
  • Research Engineers, who typically lead execution – building the systems and code that make those experiments possible at scale, and owning reliability, speed, and reproducibility.

In practice, we can support staff’s work spanning or alternating between research and engineering. If you have a preference, please specify this in your application. The team is currently led by Eric Winsor and Xander Davies – advised by Geoffrey Irving and Yarin Gal. You’ll work with incredible technical staff across AISI, including alumni from Anthropic, OpenAI, DeepMind, and top universities. You may also collaborate with external teams from Anthropic, OpenAI, and Gray Swan. We are open to hires at junior, senior, staff and principal research scientist levels.

Representative projects you might work on:

  • Designing, building, running and evaluating methods to automatically attack and evaluate safeguards, such as LLM-automated attacking and direct optimisation approaches.
  • Building a benchmark for asynchronous monitoring for signs of misuse and jailbreak development across multiple model interactions.
  • Investigating novel attacks and defences for data poisoning LLMs with backdoors or other attacker goals.
  • Performing adversarial testing of frontier AI system safeguards and producing reports that are impactful and action-guiding for safeguard developers.

What we’re looking for:

In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process. The experiences listed below should be interpreted as examples of the expertise we’re looking for, as opposed to a list of everything we expect to find in one applicant:

You may be a good fit if you have:

  • Hands-on research experience with large language models (LLMs), such as training, fine-tuning, evaluation, or safety research.
  • A demonstrated track record of peer-reviewed publications in top-tier ML conferences or journals.
  • Ability and experience writing clean, documented research code for machine learning experiments, including experience with ML frameworks like PyTorch or evaluation frameworks like Inspect.
  • A sense of mission, urgency, and responsibility for success.
  • An ability to bring your own research ideas and work in a self-directed way, while also collaborating effectively and prioritising team efforts over extensive solo work.

Strong candidates may also have:

  • Experience working on adversarial robustness, other areas of AI security, or red teaming against any kind of system.
  • Experience working on AI alignment or AI control.
  • Extensive experience writing production-quality code.
  • A desire to improve our team through mentoring and feedback, and experience doing so.
  • Experience designing, shipping, and maintaining complex technical products.

What We Offer:

  • Impact you couldn’t have anywhere else.
  • Incredibly talented, mission-driven and supportive colleagues.
  • Direct influence on how frontier AI is governed and deployed globally.
  • Work with the Prime Minister’s AI Advisor and leading AI companies.
  • Opportunity to shape the first & best-resourced public-interest research team focused on AI security.

Resources & access:

  • Pre-release access to multiple frontier models and ample compute.
  • Extensive operational support so you can focus on research and ship quickly.
  • Work with experts across national security, policy, AI research and adjacent sciences.
  • If you’re talented and driven, you’ll own important problems early.
  • 5 days off and annual stipends for learning and development, and funding for conferences and external collaborations.
  • Freedom to pursue research bets without product pressure.
  • Opportunities to publish and collaborate externally.

Life & family:

  • Modern central London office (cafes, food court, gym), or where applicable, option to work in similar government offices in Birmingham, Cardiff, Darlington, Edinburgh, Salford or Bristol.
  • Hybrid working, flexibility for occasional remote work abroad and stipends for work-from-home equipment.
  • At least 25 days’ annual leave, 8 public holidays, extra team-wide breaks and 3 days off for volunteering.
  • Generous paid parental leave (36 weeks of UK statutory leave shared between parents + 3 extra paid weeks + option for additional unpaid time).
  • On top of your salary, we contribute 28.97% of your base salary to your pension.
  • Discounts and benefits for cycling to work, donations and retail/gyms.

* These benefits apply to direct employees. Benefits may differ for individuals joining through other employment arrangements such as secondments.

Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000, made up of a base salary plus a technical allowance (take-home salary = base + technical allowance). An additional 28.97% employer pension contribution is paid on the base salary. This role sits outside the DDaT pay framework, as its scope requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.

Selection process:

The interview process may vary from candidate to candidate; however, a typical process includes technical proficiency tests, discussions with a cross-section of our team at AISI (including non-technical staff), and conversations with your team lead. The process will culminate in a conversation with members of the senior leadership team at AISI. Once an application has been submitted, candidates should expect to go through some or all of the following stages:

  • Initial assessment
  • Initial screening call
  • Research interview
  • Technical assessment
  • Behavioural interview
  • Final interview with members of the senior leadership team

Additional Information:

Use of AI in Applications: Artificial intelligence can be a useful tool to support your application; however, all examples and statements provided must be truthful, factually accurate, and taken directly from your own experience. Where plagiarism has been identified (presenting the ideas and experiences of others, or those generated by artificial intelligence, as your own), applications may be withdrawn and internal candidates may be subject to disciplinary action.

Internal Fraud Database: The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives the details from participating government organisations of civil servants who have been dismissed, or who would have been dismissed had they not resigned, for internal fraud. In instances such as this, civil servants are then banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carry out the pre-employment checks so as to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is ensured and the repetition of internal fraud is prevented.

For more information, please see the Internal Fraud Register. The Civil Service Code sets out the standards of behaviour expected of civil servants. We recruit by merit on the basis of fair and open competition, as outlined in the Civil Service Commission's recruitment principles. The Civil Service embraces diversity and promotes equal opportunities. As such, we run a Disability Confident Scheme (DCS) for candidates with disabilities who meet the minimum selection criteria. The Civil Service also offers a Redeployment Interview Scheme to civil servants who are at risk of redundancy and who meet the minimum requirements for the advertised vacancy.

Research Engineer/Research Scientist - Red Team (Misuse) employer: AI Security Institute

The AI Security Institute is an exceptional employer, offering a unique opportunity to work at the forefront of AI safety and governance in London. With a mission-driven culture, employees benefit from direct influence on global AI policy, access to cutting-edge resources, and a supportive environment that fosters collaboration and innovation. The institute prioritises employee growth through generous learning stipends, flexible working arrangements, and a commitment to work-life balance, making it an ideal place for those passionate about impactful research in AI security.

Contact Detail:

AI Security Institute Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land Research Engineer/Research Scientist - Red Team (Misuse)

✨Tip Number 1

Network like a pro! Reach out to folks in the AI security space, especially those connected to the AI Security Institute. Attend meetups or webinars, and don’t be shy about sliding into DMs on LinkedIn. You never know who might have the inside scoop on job openings!

✨Tip Number 2

Show off your skills! If you’ve got research or projects related to LLMs, make sure to highlight them in conversations. Bring your portfolio to interviews or share links to your publications. We love seeing what you can do!

✨Tip Number 3

Prepare for technical chats! Brush up on your knowledge of adversarial robustness and AI security. We want to see how you think on your feet, so practice explaining your past work and how it relates to the role. Mock interviews can be super helpful!

✨Tip Number 4

Apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in joining our mission at the AI Security Institute. Don’t miss out on this opportunity!

We think you need these skills to ace Research Engineer/Research Scientist - Red Team (Misuse)

Research Experience with Large Language Models (LLMs)
Peer-Reviewed Publication Record
Machine Learning Frameworks (e.g., PyTorch)
Evaluation Frameworks (e.g., Inspect)
Adversarial Robustness
AI Security Knowledge
Red Teaming Experience
AI Alignment and Control
Production Quality Code Writing
Technical Product Design and Maintenance
Collaboration and Teamwork
Self-Directed Research
Experiment Design and Execution
Impactful Reporting

Some tips for your application 🫡

Show Your Passion: When you're writing your application, let your enthusiasm for AI security shine through! We want to see why you’re excited about joining the Misuse Red Team and how your background aligns with our mission.

Tailor Your Experience: Make sure to highlight your relevant experience with large language models and any research you've done in AI security. We love seeing specific examples that demonstrate your skills and how they relate to the role.

Be Clear and Concise: Keep your application straightforward and to the point. We appreciate clarity, so avoid jargon unless it’s necessary. Make it easy for us to see your qualifications and fit for the team!

Apply Through Our Website: Don’t forget to submit your application through our website! It’s the best way for us to receive your details and ensures you’re considered for the role. We can’t wait to hear from you!

How to prepare for a job interview at AI Security Institute

✨Know Your Stuff

Make sure you’re well-versed in the latest research and developments in large language models (LLMs). Brush up on your knowledge of adversarial robustness and AI security, as these are key areas for the role. Being able to discuss recent publications or breakthroughs will show your passion and expertise.

✨Showcase Your Projects

Prepare to talk about your previous work, especially any hands-on experience with LLMs or related projects. Highlight specific challenges you faced, how you overcame them, and the impact of your work. This will demonstrate your problem-solving skills and ability to contribute to the team.

✨Collaborative Spirit

Emphasise your ability to work in a team. The Misuse Red Team values collaboration, so be ready to share examples of how you’ve successfully worked with others in past projects. Discuss how you prioritise team goals over individual achievements to align with their mission-driven culture.

✨Ask Insightful Questions

Prepare thoughtful questions about the team’s current projects and future directions. This shows your genuine interest in the role and helps you gauge if the team’s objectives align with your career aspirations. It’s also a great way to engage with your interviewers and leave a lasting impression.
