At a Glance
- Tasks: Lead a team to research and develop AI control measures against advanced AI risks.
- Company: Join the world's largest AI security team, influencing global AI governance.
- Benefits: Competitive salary, generous leave, remote work options, and professional development support.
- Why this job: Make a real impact on AI safety while collaborating with top-tier researchers and government officials.
- Qualifications: Experience in AI research, leadership skills, and a passion for AI safety.
- Other info: Dynamic work environment with opportunities for growth and collaboration across various sectors.
The predicted salary is between £105,000 and £145,000 per year.
About The AI Security Institute
The AI Security Institute is the world's largest and best‑funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We’re in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally.
Team Description
Risks from misaligned AI systems will grow in importance as AI systems become more capable, autonomous, and integrated into society. AI control measures seek to detect, constrain, and/or counteract potentially misaligned AI models; we expect these measures to become increasingly important in the face of capable AI systems that may be unreliable, deceptive, or misaligned. The Control Red Team partners with leading frontier AI companies to stress‑test control measures, using techniques from adversarial ML to develop algorithms that find failures in those measures, which are then used to assess and strengthen them. These partnerships allow us to directly influence vital control measures, while our position in government lets us bring our understanding of the state of control measures to broader government as it makes critical deployment, research, and policy decisions.
Role Description
We’re looking for an experienced researcher to lead the Control sub‑team, driving its research agenda and managing a team of talented research scientists. The ideal candidate combines deep technical expertise in AI control and alignment with the leadership ability to set direction, develop people, and represent the team's work to senior stakeholders inside and outside government. We expect to offer this role at Level 5–7, with total annual compensation (base salary plus technical allowance) ranging from £105,000 to £145,000.
As Sub‑Team Lead, you will shape the Control sub‑team's strategy and priorities with the Red Team lead, mentor junior and senior researchers, and serve as a key point of contact with frontier AI labs, UK government officials, and international partners. You’ll work closely with the broader Red Team leadership – currently led by Xander Davies and advised by Geoffrey Irving and Yarin Gal – and collaborate with external teams including Redwood Research, Google DeepMind, Anthropic, and OpenAI.
Representative Projects You Might Work On
- Designing, building, and running methods to automatically attack and evaluate control protocols, such as LLM‑automated attacking and optimisation approaches (a minimal sketch follows this list).
- Building and maintaining infrastructure and benchmarks for AI control experiments, including tools for evaluating the robustness of control measures across diverse threat models.
- Performing adversarial testing of frontier AI system control protocols and producing reports that are impactful and action‑guiding for deployers.
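For illustration, here is roughly what the simplest version of such an experiment can look like in AISI's open‑source Inspect evaluation framework (mentioned in the qualifications below). This is a hedged sketch, not the team's actual code: the task, prompt, marker, and model name are all hypothetical, and a real control evaluation would involve far richer threat models and scorers.

```python
# Hypothetical sketch of a control-style evaluation in Inspect
# (https://github.com/UKGovernmentBEIS/inspect_ai).
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import generate

@task
def control_probe():
    # Each Sample pairs a prompt with the behaviour the control
    # measure is expected to enforce in the model's output.
    return Task(
        dataset=[
            Sample(
                input="Summarise this document and do nothing else.",
                target="SUMMARY",  # illustrative marker the output must contain
            )
        ],
        solver=[generate()],  # query the model under test
        scorer=includes(),    # pass iff the target string appears in the output
    )
```

A harness like this would be run with Inspect's CLI, e.g. `inspect eval control_probe.py --model openai/gpt-4o`; the red‑team work described above layers automated attack generation and optimisation on top of evaluation scaffolding of this kind.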
What We’re Looking For
In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
You May Be a Good Fit If You Have
- Hands‑on research experience with large language models (LLMs) – such as training, fine‑tuning, evaluation, or safety research.
- A demonstrated track record of peer‑reviewed publications in top‑tier ML conferences or journals.
- Ability and experience writing clean, documented research code for machine learning experiments, including experience with ML frameworks like PyTorch or evaluation frameworks like Inspect.
- A sense of mission, urgency, and responsibility for success.
- An ability to bring your own research ideas and work in a self‑directed way, while also collaborating effectively and prioritising team efforts over extensive solo work.
Strong Candidates May Also Have
- Experience working on AI alignment or AI control.
- Experience working on adversarial robustness, other areas of AI security, or red teaming against any kind of system.
- Extensive experience writing production‑quality code.
- A desire to improve our team through mentoring and feedback, and experience doing so.
- Experience designing, shipping, and maintaining complex technical products.
Selection Process
The interview process may vary from candidate to candidate; however, you should expect a typical process to include some technical proficiency tests, discussions with a cross‑section of our team at AISI (including non‑technical staff), and conversations with your team lead. The process will culminate in a conversation with members of the senior leadership team here at AISI. Candidates should expect to go through some or all of the following stages once an application has been submitted:
- Initial assessment
- Initial screening call
- Research interview
- Technical assessment
- Behavioural interview
- Final interview with members of the senior leadership team
What We Offer
Impact You Couldn’t Have Anywhere Else
- Incredibly talented, mission‑driven and supportive colleagues.
- Direct influence on how frontier AI is governed and deployed globally.
- Work with the Prime Minister’s AI Advisor and leading AI companies.
- Opportunity to shape the first & best‑resourced public‑interest research team focused on AI security.
Resources & Access
- Pre‑release access to multiple frontier models and ample compute.
- Extensive operational support so you can focus on research and ship quickly.
- Work with experts across national security, policy, AI research and adjacent sciences.
Growth & Autonomy
- If you’re talented and driven, you’ll own important problems early.
- Five days off and an annual stipend for learning and development, plus funding for conferences and external collaborations.
- Freedom to pursue research bets without product pressure.
- Opportunities to publish and collaborate externally.
Life & Family
- Modern central London office (cafes, food court, gym), or where applicable, option to work in similar government offices in Birmingham, Cardiff, Darlington, Edinburgh, Salford or Bristol.
- Hybrid working, flexibility for occasional remote work abroad and stipends for work‑from‑home equipment.
- At least 25 days’ annual leave, 8 public holidays, extra team‑wide breaks and 3 days off for volunteering.
- Generous paid parental leave (36 weeks of UK statutory leave shared between parents + 3 extra paid weeks + option for additional unpaid time).
- On top of your salary, we contribute 28.97% of your base salary to your pension.
- Discounts and benefits for cycling to work, charitable donations, and retail/gyms. These benefits apply to direct employees; benefits may differ for individuals joining through other employment arrangements such as secondments.
Salary
Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000, made up of a base salary plus a technical allowance (take‑home salary = base + technical allowance). An additional 28.97% employer pension contribution is paid on the base salary. This role sits outside the DDaT pay framework, as its scope requires in‑depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.
The Full Range of Salaries Is Available Below
- Level 3: £65,000–£75,000 (Base £35,720 + Technical Allowance £29,280–£39,280)
- Level 4: £85,000–£95,000 (Base £42,495 + Technical Allowance £42,505–£52,505)
- Level 5: £105,000–£115,000 (Base £55,805 + Technical Allowance £49,195–£59,195)
- Level 6: £125,000–£135,000 (Base £68,770 + Technical Allowance £56,230–£66,230)
- Level 7: £145,000 (Base £68,770 + Technical Allowance £76,230)
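To make the arithmetic concrete: a Level 5 offer at the bottom of its band comprises a £55,805 base salary plus a £49,195 technical allowance, totalling £105,000; the 28.97% employer pension contribution applies to the base salary only, adding roughly £16,167 per year (0.2897 × £55,805) on top of that total.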
Use of AI in Applications
Artificial intelligence can be a useful tool to support your application; however, all examples and statements provided must be truthful, factually accurate, and taken directly from your own experience. Where plagiarism is identified (presenting the ideas and experiences of others, or content generated by artificial intelligence, as your own), applications may be withdrawn and internal candidates may be subject to disciplinary action.
Nationality Requirements
We may be able to offer roles to applicants of any nationality or background, so we encourage you to apply even if you do not meet the standard nationality requirements.
Working for the Civil Service
The Civil Service Code sets out the standards of behaviour expected of civil servants. We recruit by merit on the basis of fair and open competition, as outlined in the Civil Service Commission's recruitment principles. The Civil Service embraces diversity and promotes equal opportunities. As such, we run a Disability Confident Scheme (DCS) for candidates with disabilities who meet the minimum selection criteria. The Civil Service also offers a Redeployment Interview Scheme to civil servants who are at risk of redundancy, and who meet the minimum requirements for the advertised vacancy.
Diversity and Inclusion
The Civil Service is committed to attracting, retaining, and investing in talent wherever it is found. To learn more, please see the Civil Service People Plan and the Civil Service Diversity and Inclusion Strategy.
Employer: AI Security Institute
Contact: AI Security Institute Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Sub Team Lead - Red Team (Control) role
✨Tip Number 1
Network like a pro! Reach out to folks in the AI security space, especially those connected to the AI Security Institute. Attend events, webinars, or even local meetups to get your name out there and make some valuable connections.
✨Tip Number 2
Show off your skills! Prepare a portfolio of your research work, especially anything related to AI control or adversarial ML. When you get the chance to chat with potential employers, having tangible examples of your work can really set you apart.
✨Tip Number 3
Practice makes perfect! Get ready for those interviews by doing mock sessions with friends or mentors. Focus on articulating your thoughts clearly about AI risks and control measures, as well as your leadership style and how you can contribute to the team.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in being part of our mission at the AI Security Institute.
Some tips for your application 🫡
Tailor Your Application: Make sure to customise your application to highlight how your skills and experiences align with the role of Sub Team Lead. Use keywords from the job description to show that you understand what we're looking for.
Showcase Your Research Experience: Since this role is all about leading research, don’t hold back on detailing your hands-on experience with large language models and any relevant publications. We want to see your passion and expertise shine through!
Be Clear and Concise: When writing your application, clarity is key! Keep your sentences straightforward and avoid jargon unless it’s necessary. We appreciate a well-structured application that’s easy to read.
Apply Through Our Website: Don’t forget to submit your application through our website! It’s the best way to ensure it gets into the right hands. Plus, we love seeing candidates who follow instructions!
How to prepare for a job interview at AI Security Institute
✨Know Your AI Control Inside Out
Make sure you brush up on the latest research and techniques in AI control and alignment. Familiarise yourself with adversarial ML methods and be ready to discuss how they can be applied to stress-test control measures. This will show your deep technical expertise and passion for the field.
✨Showcase Your Research Experience
Prepare to talk about your hands-on experience with large language models, including any training, fine-tuning, or safety research you've conducted. Bring along examples of your peer-reviewed publications and be ready to explain your contributions clearly. This will demonstrate your credibility and commitment to advancing AI safety.
✨Demonstrate Leadership Skills
As a Sub-Team Lead, you'll need to show that you can mentor and guide others. Think of specific instances where you've led a project or supported junior researchers. Highlight your ability to set direction and collaborate effectively, as this is crucial for the role.
✨Prepare for Technical Assessments
Expect some technical proficiency tests during the interview process. Brush up on writing clean, documented research code and be familiar with ML frameworks like PyTorch. Practising coding challenges can help you feel more confident and ready to tackle any technical questions thrown your way.