At a Glance
- Tasks: Join a team researching AI security and develop innovative solutions to protect against misuse.
- Company: The AI Security Institute, the leading organisation in AI risk management.
- Benefits: Competitive salary, generous leave, hybrid working, and professional development opportunities.
- Why this job: Make a real impact on global AI governance and work with top experts in the field.
- Qualifications: Experience with large language models and a strong publication record in AI research.
- Other info: Collaborative environment with excellent growth potential and access to cutting-edge resources.
The predicted salary is between 65000 - 145000 £ per year.
About The AI Security Institute
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We are embedded in the UK government with direct lines to No. 10 (the Prime Minister's office) and work with frontier developers and governments globally. Our aim is to mobilise governments to ensure advanced AI develops safely.
Team Description
Interventions that secure a system from abuse by bad actors or misaligned AI systems will grow in importance as AI systems become more capable and autonomous. The AI Security Institute’s Red Team researches these interventions across three sub-teams: misuse, alignment, and control. We evaluate the protections on current frontier AI systems, research what measures could better secure them in the future, and share our findings with frontier AI companies, UK officials, and other governments.
We have published on several topics, including agent misuse, defending fine-tuning APIs, third-party attacks on agents, and safeguards safety cases. Our work has advanced benchmarking of agent misuse, identified previously unknown vulnerabilities and produced insights into the feasibility of attacks and defences in data poisoning and fine-tuning APIs.
Role Description
We are looking for researchers across the misuse, alignment, and control sub-teams with expertise in developing and analysing attacks and protections for large-language-model-based systems, or broader experience with frontier LLM research. An ideal candidate has a strong record of performing and publishing novel, impactful research in these or related areas.
We primarily seek research scientists, but we also support staff working between research and engineering. The team’s work spans research (assessing threats, performing adversarial ML research, developing novel attacks) and engineering (building infrastructure for running evaluations).
The team is led by Xander Davies and advised by Geoffrey Irving and Yarin Gal. You will work with technical staff across AISI, including alumni from Anthropic, OpenAI, DeepMind, and leading universities, and may collaborate with external teams such as Anthropic, OpenAI, and Gray Swan.
Open positions span junior, senior, staff, and principal research scientist levels.
Representative Projects
- Design, build, run and evaluate methods to automatically attack and evaluate safeguards such as LLM-automated attacking and direct optimisation approaches.
- Design and run experiments that test measures to keep AI systems under human control even when they may be misaligned.
- Build a benchmark for asynchronous monitoring of misuse and jailbreak development across multiple model interactions.
- Investigate novel attacks and defences for data-poisoning LLMs with backdoors or other attacker goals.
- Perform adversarial testing of frontier AI system safeguards and produce reports that are impactful and action-guiding for safeguard developers.
What We’re Looking For
In accordance with Civil Service Commission rules, the following criteria will be used in the interview process.
Required Experience
- Hands-on research experience with large language models (training, fine-tuning, evaluation, or safety research).
- A demonstrated track record of peer-reviewed publications at top-tier ML conferences or journals.
- Experience writing clean, documented research code for machine learning experiments, including frameworks such as PyTorch or evaluation tools like Inspect.
- A sense of mission, urgency, and responsibility for success.
- Ability to bring your own research ideas and work in a self-directed way while collaborating effectively and prioritising team efforts over extensive solo work.
Strong Candidates May Also Have
- Experience in adversarial robustness, other AI security areas, or red-teaming against any system.
- Experience in AI alignment or AI control.
- Extensive experience writing production-quality code.
- Desire and experience with mentoring and feedback to improve the team.
- Experience designing, shipping, and maintaining complex technical products.
What We Offer
Impact you couldn't have elsewhere. Incredibly talented, mission-driven, supportive colleagues. Direct influence on how frontier AI is governed and deployed globally. Work with the Prime Minister’s AI Advisor and leading AI companies. Opportunity to shape the first and best-resourced public-interest research team focused on AI security.
Resources & Access
Pre-release access to multiple frontier models and ample compute. Extensive operational support so you can focus on research and ship quickly. Collaboration with experts across national security, policy, AI research and adjacent sciences.
Growth & Autonomy
Own important problems early if you are talented and driven. 5 days off for learning and development, annual stipends for external learning and conference funding. Freedom to pursue research bets without product pressure. Opportunities to publish and collaborate externally.
Life & Family
Modern central London office (cafes, food court, gym) or office in Birmingham, Cardiff, Darlington, Edinburgh, Salford or Bristol. Hybrid working, flexibility for occasional remote work abroad and stipends for work-from-home equipment. At least 25 days annual leave, 8 public holidays, extra team-wide breaks and 3 days off for volunteering. Generous paid parental leave (36 weeks statutory leave shared between parents + 3 extra paid weeks + option for additional unpaid time). Pension contribution of 28.97% of base salary. Discounts and benefits for cycling to work, donations, and retail/gyms. Benefits for direct employees; may differ for secondments.
Salary
Annual salary is benchmarked to role scope and relevant experience. Most offers range between £65,000 and £145,000 (base plus technical allowance), with 28.97% employer pension and other benefits on top. This role sits outside the DDaT pay framework.
The Full Range Of Salaries Are As Follows
- Level 3: £65,000-£75,000 (Base £35,720 + Technical Allowance £29,280-£39,280)
- Level 4: £85,000-£95,000 (Base £42,495 + Technical Allowance £42,505-£52,505)
- Level 5: £105,000-£115,000 (Base £55,805 + Technical Allowance £49,195-£59,195)
- Level 6: £125,000-£135,000 (Base £68,770 + Technical Allowance £56,230-£66,230)
- Level 7: £145,000 (Base £68,770 + Technical Allowance £76,230)
Selection Process
Typical stages include: Initial assessment, Initial screening call, Research interview, Technical assessment, Behavioural interview, Final interview with senior leadership team.
Additional Information
Use of AI in Applications: Artificial Intelligence can support your application, but all statements must be truthful, factually accurate and directly from your own experience. Plagiarism or using AI to generate your own experiences may result in withdrawal of your application.
Security: Successful candidates must undergo a criminal record check and complete baseline personnel security standard (BPSS) clearance. Preference is given to those eligible for counter-terrorist check (CTC). Some roles may require higher clearance levels, which we will specify in the announcement.
Nationality Requirements: We consider applicants from any nationality or background. Please apply even if you do not meet standard nationality requirements.
Working for the Civil Service: The Civil Service Code sets the standards of behaviour expected of civil servants. Recruitment is by merit on the basis of fair and open competition. We embrace diversity and promote equal opportunities, including a Disability Confident Scheme and a Redeployment Interview Scheme for those at risk of redundancy. The Civil Service also offers inclusive recruitment practices for disabled candidates.
Diversity and Inclusion: The Civil Service is committed to attracting, retaining and investing in talent wherever it is found.
Research Scientist - Red Team in London employer: CHEManager International
Contact Detail:
CHEManager International Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Research Scientist - Red Team in London
✨Tip Number 1
Network like a pro! Reach out to folks in the AI security space, especially those connected to the AI Security Institute. Attend events, webinars, or even local meetups to get your name out there and show your passion for the field.
✨Tip Number 2
Prepare for those interviews! Brush up on your knowledge of large language models and be ready to discuss your past research. We want to see your thought process, so practice explaining your work clearly and confidently.
✨Tip Number 3
Showcase your skills! If you’ve got any projects or papers, make sure to have them handy. We love seeing real examples of your work, especially if they relate to adversarial ML or AI safety. It’s all about demonstrating your expertise!
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re serious about joining our mission-driven team at the AI Security Institute.
We think you need these skills to ace Research Scientist - Red Team in London
Some tips for your application 🫡
Tailor Your Application: Make sure to customise your CV and cover letter to highlight your relevant experience in AI security and research. We want to see how your skills align with our mission at the AI Security Institute, so don’t hold back on showcasing your expertise!
Showcase Your Research: Include details about your past research projects, especially those related to large language models or adversarial ML. We love seeing peer-reviewed publications, so make sure to mention any that you've contributed to – it really helps us gauge your impact in the field.
Be Clear and Concise: When writing your application, clarity is key! Use straightforward language and avoid jargon where possible. We appreciate a well-structured application that gets straight to the point, making it easy for us to see why you’d be a great fit.
Apply Through Our Website: Don’t forget to submit your application through our official website! It’s the best way to ensure we receive all your materials correctly and can process your application smoothly. Plus, it shows you’re serious about joining our team!
How to prepare for a job interview at CHEManager International
✨Know Your Research Inside Out
Make sure you’re well-versed in your own research and any relevant publications. Be prepared to discuss your findings, methodologies, and the implications of your work. This will show that you’re not just knowledgeable but also passionate about your field.
✨Understand the Role and Team Dynamics
Familiarise yourself with the specific sub-team you’re applying for, whether it’s misuse, alignment, or control. Knowing how your skills fit into their projects will help you articulate your value and demonstrate your enthusiasm for contributing to their mission.
✨Prepare for Technical Questions
Expect to face technical assessments related to large language models and adversarial ML research. Brush up on your coding skills, especially in frameworks like PyTorch, and be ready to discuss your approach to writing clean, documented research code.
✨Showcase Collaboration Skills
Highlight your ability to work in a team and share ideas. The role requires collaboration with various experts, so be ready to discuss past experiences where you successfully worked with others to achieve common goals. This will demonstrate that you can thrive in a dynamic environment.