Research Scientist - Red Team in Slough

Research Scientist - Red Team in Slough

Slough Full-Time 65000 - 145000 £ / year (est.) No home office possible
Go Premium
A

At a Glance

  • Tasks: Research and develop cutting-edge AI security measures to protect against misuse and misalignment.
  • Company: Join the world's leading team at the AI Security Institute, shaping AI governance.
  • Benefits: Competitive salary, generous leave, remote work options, and professional development opportunities.
  • Why this job: Make a real impact on AI safety while collaborating with top experts in the field.
  • Qualifications: Experience with large language models and a strong publication record in AI research.
  • Other info: Dynamic work environment with opportunities for growth and collaboration across global teams.

The predicted salary is between 65000 - 145000 £ per year.

About The AI Security Institute

The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We’re in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally. We’re here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.

Team Description

Interventions that secure a system from abuse by bad actors or misaligned AI systems will grow in importance as AI systems become more capable, autonomous, and integrated into society. The AI Security Institute’s Red Team researches these interventions across three sub-teams (misuse, alignment, and control): we evaluate the protections on current frontier AI systems and research what measures could better secure them in the future. We share our findings with frontier AI companies, key UK officials, and other governments in order to inform their respective deployment, research, and policy decision-making.

We have published on several topics, including agent misuse, defending finetuning APIs, third-party attacks on agents, safeguards safety cases, and attacks on layered defenses, a library for running AI control experiments, and pre-deployment testing of misalignment risks. Some example impacts have been advancing the benchmarking of agent misuse, identifying vulnerabilities previously unknown to frontier AI companies, and producing insights into the feasibility and effectiveness of attacks and defences in data poisoning and fine-tuning APIs.

In our team, you can meaningfully advance both research on how to attack and defend frontier AI systems, as well as government understanding of misuse and misalignment risks, which we see as critical to the safe and secure deployment of advanced AI.

Role Description

We’re looking for researchers across our misuse, alignment, and control subteams with expertise developing and analysing attacks and protections for systems based on large language models or who have broader experience with frontier LLM research and development. An ideal candidate would have a strong record of performing and publishing novel and impactful research in these or other areas of LLM research.

We’re primarily looking for research scientists, but we can support staff’s work spanning or alternating between research and engineering. The broader team's work includes research – like assessing the threats to frontier systems, performing novel adversarial ML research on frontier LLMs, and developing novel attacks – and engineering, such as building infrastructure for running evaluations.

The team is currently led by Xander Davies and advised by Geoffrey Irving and Yarin Gal. You’ll work with incredible technical staff across AISI, including alumni from Anthropic, OpenAI, DeepMind, and top universities. You may also collaborate with external teams like Anthropic, OpenAI, and Gray Swan.

We are open to hires at junior, senior, staff and principal research scientist levels.

Representative projects you might work on:

  • Designing, building, running and evaluating methods to automatically attack and evaluate safeguards, such as LLM-automated attacking and direct optimisation approaches.
  • Designing and running experiments that test measures to keep AI systems under human control even when they might be misaligned.
  • Building a benchmark for asynchronous monitoring for signs of misuse and jailbreak development across multiple model interactions.
  • Investigating novel attacks and defences for data poisoning LLMs with backdoors or other attacker goals.
  • Performing adversarial testing of frontier AI system safeguards and produce reports that are impactful and action-guiding for safeguard developers.

What We’re Looking For

In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.

Required Experience

The experiences listed below should be interpreted as examples of the expertise we’re looking for, as opposed to a list of everything we expect to find in one applicant:

You May Be a Good Fit If You Have:

  • Hands-on research experience with large language models (LLMs) - such as training, fine-tuning, evaluation, or safety research.
  • A demonstrated track record of peer-reviewed publications in top-tier ML conferences or journals.
  • Ability and experience writing clean, documented research code for machine learning experiments, including experience with ML frameworks like PyTorch or evaluation frameworks like Inspect.
  • A sense of mission, urgency, responsibility for success.
  • An ability to bring your own research ideas and work in a self-directed way, while also collaborating effectively and prioritising team efforts over extensive solo work.

Strong Candidates May Also Have:

  • Experience working on adversarial robustness, other areas of AI security, or red teaming against any kind of system.
  • Experience working on AI alignment or AI control.
  • Extensive experience writing production quality code.
  • Desire to and experience with improving our team through mentoring and feedback.
  • Experience designing, shipping, and maintaining complex technical products.

What We Offer

Impact you couldn't have anywhere else:

  • Incredibly talented, mission-driven and supportive colleagues.
  • Direct influence on how frontier AI is governed and deployed globally.
  • Work with the Prime Minister’s AI Advisor and leading AI companies.
  • Opportunity to shape the first & best-resourced public-interest research team focused on AI security.

Resources & access:

  • Pre-release access to multiple frontier models and ample compute.
  • Extensive operational support so you can focus on research and ship quickly.
  • Work with experts across national security, policy, AI research and adjacent sciences.

Growth & autonomy:

  • If you’re talented and driven, you’ll own important problems early.
  • 5 days off learning and development, annual stipends for learning and development and funding for conferences and external collaborations.
  • Freedom to pursue research bets without product pressure.
  • Opportunities to publish and collaborate externally.

Life & family:

  • Modern central London office (cafes, food court, gym) or option to work in similar government offices in Birmingham, Cardiff, Darlington, Edinburgh, Salford or Bristol.
  • Hybrid working, flexibility for occasional remote work abroad and stipends for work-from-home equipment.
  • At least 25 days’ annual leave, 8 public holidays, extra team-wide breaks and 3 days off for volunteering.
  • Generous paid parental leave (36 weeks of UK statutory leave shared between parents + 3 extra paid weeks + option for additional unpaid time).
  • On top of your salary, we contribute 28.97% of your base salary to your pension.
  • Discounts and benefits for cycling to work, donations and retail/gyms. These benefits apply to direct employees. Benefits may differ for individuals joining through other employment arrangements such as secondments.

Salary:

Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000 (base plus technical allowance), with 28.97% employer pension and other benefits on top.

This role sits outside of the DDaT pay framework given the scope of this role requires in depth technical expertise in frontier AI safety, robustness and advanced AI architectures.

The Full Range Of Salaries Are As Follows:

  • Level 3: £65,000–£75,000 (Base £35,720 + Technical Allowance £29,280–£39,280)
  • Level 4: £85,000–£95,000 (Base £42,495 + Technical Allowance £42,505–£52,505)
  • Level 5: £105,000–£115,000 (Base £55,805 + Technical Allowance £49,195–£59,195)
  • Level 6: £125,000–£135,000 (Base £68,770 + Technical Allowance £56,230–£66,230)
  • Level 7: £145,000 (Base £68,770 + Technical Allowance £76,230)

Selection process:

In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process. The interview process may vary candidate to candidate, however, you should expect a typical process to include some technical proficiency tests, discussions with a cross-section of our team at AISI (including non-technical staff), conversations with your team lead. The process will culminate in a conversation with members of the senior leadership team here at AISI.

Candidates should expect to go through some or all of the following stages once an application has been submitted:

  • Initial assessment
  • Initial screening call
  • Research interview
  • Technical assessment
  • Behavioural interview
  • Final interview with members of the senior leadership team

Additional Information:

Internal Fraud Database: The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives the details from participating government organisations of civil servants who have been dismissed, or who would have been dismissed had they not resigned, for internal fraud. In instances such as this, civil servants are then banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carry out the pre-employment checks so as to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is ensured and the repetition of internal fraud is prevented.

Security: Successful candidates must undergo a criminal record check and get baseline personnel security standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for counter-terrorist check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement.

Nationality requirements: We may be able to offer roles to applicants from any nationality or background. As such we encourage you to apply even if you do not meet the standard nationality requirements.

Working for the Civil Service: The Civil Service Code sets out the standards of behaviour expected of civil servants. We recruit by merit on the basis of fair and open competition, as outlined in the Civil Service Commission's recruitment principles. The Civil Service embraces diversity and promotes equal opportunities. As such, we run a Disability Confident Scheme (DCS) for candidates with disabilities who meet the minimum selection criteria. The Civil Service also offers a Redeployment Interview Scheme to civil servants who are at risk of redundancy, and who meet the minimum requirements for the advertised vacancy.

Diversity and Inclusion: The Civil Service is committed to attract, retain and invest in talent wherever it is found.

Research Scientist - Red Team in Slough employer: AI Security Institute

The AI Security Institute is an exceptional employer, offering a unique opportunity to work at the forefront of AI security in a dynamic and supportive environment. With direct influence on global AI governance and access to cutting-edge resources, employees can expect meaningful contributions to critical research while enjoying generous benefits, including extensive professional development support and a flexible work-life balance in modern offices across the UK. Join a mission-driven team of talented professionals dedicated to shaping the future of AI safety and security.
A

Contact Detail:

AI Security Institute Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land Research Scientist - Red Team in Slough

✨Tip Number 1

Network like a pro! Reach out to folks in the AI security space, especially those connected to the AI Security Institute. Attend relevant meetups or conferences and don’t be shy about introducing yourself. You never know who might have a lead on your dream job!

✨Tip Number 2

Show off your skills! Prepare a portfolio of your research work, publications, and any projects related to large language models. When you get the chance to chat with potential employers, share your insights and findings. It’s a great way to demonstrate your expertise and passion.

✨Tip Number 3

Practice makes perfect! Get ready for interviews by brushing up on common technical questions related to AI security and red teaming. Consider doing mock interviews with friends or colleagues to build confidence and refine your answers.

✨Tip Number 4

Apply through our website! We’re always on the lookout for talented individuals who can contribute to our mission. Don’t hesitate to submit your application directly through the AI Security Institute’s careers page – it’s the best way to get noticed!

We think you need these skills to ace Research Scientist - Red Team in Slough

Research Experience with Large Language Models (LLMs)
Peer-Reviewed Publications
Machine Learning Frameworks (e.g., PyTorch)
Evaluation Frameworks (e.g., Inspect)
Adversarial Robustness
AI Security Knowledge
AI Alignment and Control
Production Quality Code Writing
Mentoring and Feedback
Experiment Design and Execution
Data Poisoning Defence Strategies
Collaboration Skills
Self-Directed Research
Impactful Reporting

Some tips for your application 🫡

Tailor Your Application: Make sure to customise your CV and cover letter for the Research Scientist role. Highlight your relevant experience with large language models and any impactful research you've done. We want to see how your skills align with our mission!

Showcase Your Research: Include details about your peer-reviewed publications and any novel research you've conducted. We love seeing candidates who can demonstrate a strong track record in AI security and frontier LLMs, so don’t hold back!

Be Clear and Concise: When writing your application, keep it straightforward and to the point. Use clear language to describe your experiences and achievements. We appreciate clarity, and it helps us understand your qualifications better!

Apply Through Our Website: Don’t forget to submit your application through our official website! It’s the best way to ensure we receive your materials directly and can consider you for this exciting opportunity at the AI Security Institute.

How to prepare for a job interview at AI Security Institute

✨Know Your Research Inside Out

Make sure you’re well-versed in your own research and any relevant publications. Be prepared to discuss your findings, methodologies, and the implications of your work. This shows not only your expertise but also your passion for the field.

✨Understand the Role and Team Dynamics

Familiarise yourself with the AI Security Institute's mission and the specific sub-team you'll be joining. Knowing how your skills fit into their objectives will help you articulate your value during the interview.

✨Prepare for Technical Questions

Expect technical proficiency tests and questions about your experience with large language models and adversarial ML. Brush up on relevant frameworks like PyTorch and be ready to discuss your coding practices and past projects.

✨Showcase Collaboration Skills

Highlight your ability to work in a team and mentor others. The role requires collaboration across various teams, so share examples of how you've successfully worked with others to achieve common goals.

Research Scientist - Red Team in Slough
AI Security Institute
Location: Slough
Go Premium

Land your dream job quicker with Premium

You’re marked as a top applicant with our partner companies
Individual CV and cover letter feedback including tailoring to specific job roles
Be among the first applications for new jobs with our AI application
1:1 support and career advice from our career coaches
Go Premium

Money-back if you don't land a job in 6-months

A
Similar positions in other companies
UK’s top job board for Gen Z
discover-jobs-cta
Discover now
>