Research Scientist, Open Source Technical Safeguards

Full-Time · £65,000 – £145,000 / year (est.) · No home office possible

At a Glance

  • Tasks: Join a dynamic team to develop safeguards against AI misuse and protect society.
  • Company: AI Security Institute, leading the charge in AI safety and governance.
  • Benefits: Competitive salary, generous leave, remote work options, and professional development opportunities.
  • Why this job: Make a real impact on AI safety while collaborating with top experts and government officials.
  • Qualifications: Experience in applied ML, security engineering, and strong Python skills required.
  • Other info: Flexible working arrangements and a supportive, mission-driven environment.

The predicted salary is between £65,000 and £145,000 per year.

About the AI Security Institute

The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We’re in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally.

Societal Resilience is a multidisciplinary team that studies how advanced AI models can impact people and society. We research the prevalence and severity of high-impact societal risks caused by frontier AI deployment, and develop mitigations to address these risks. Core research topics include the use of AI to assist with criminal activities, undermine trust in information, jeopardise psychological wellbeing, or conduct malicious social engineering, as well as preventing critical overreliance on insufficiently robust systems.

One emerging risk area we are concerned with is the use of open-weight models to generate child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII). AISI has previously published research on methods for making open-weight models more robust against malicious tampering. In this role, you’ll join a strongly collaborative technical research team to help design and develop technical safeguards for open-weight models that reduce the risks of CSAM, NCII, and other harms.

This is a research scientist position focused on developing technical safeguards against tampering with open-weight models. The role will focus on mitigating AI-generated CSAM and NCII by targeting the real-world supply chain driving harm: open-weight models, adaptation artifacts (LoRAs, guides), and downstream distribution infrastructure (hosting platforms, app stores, operating systems).

Our approach prioritises downstream mitigations and actors beyond frontier model developers. This role will build technical tools, protocols, and evidence that platforms and OS/app ecosystems can adopt. This work belongs inside UK government because effective mitigation requires cross-agency coordination (Home Office, DSIT, Ofcom), engagement with regulated platforms under the Online Safety Act, and credible evidence to inform policy trade-offs across innovation, competition, and child protection.

This role will synthesize threat intelligence on how AI-generated CSAM and NCII are developed, create scalable screening methodologies that platforms can realistically run, and publish best-practice protocols with NGOs to raise the floor across the ecosystem.

You’ll work closely with engineers and domain experts across AISI, as well as external research collaborators at the Home Office, the Internet Watch Foundation, and Ofcom. Researchers on this team have substantial freedom to shape independent research agendas, lead collaborations, and initiate projects that push the frontier of what evaluations can reveal.

Example Projects

  • Publish a Problem Book framing the technical challenges and research directions for preventing CSAM/NCII misuse across model and hosting layers.
  • Develop threat models for how AI-generated CSAM and NCII are created and shared.
  • Design and pilot scalable, automated screening methodologies that platforms can run pre-publication on uploads (topic-general prototypes that avoid exposure to illegal content).
  • Develop approaches for identifying and tracking known or novel CSAM LoRAs to enable platform blocking at upload (an illustrative sketch follows this list).
  • Co-develop best-practice protocols with NGOs (e.g., Thorn/IWF) for hosting, app store, and OS enforcement.
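
To make the LoRA-tracking project above a little more concrete, here is a minimal, purely illustrative Python sketch of what upload-time screening against a vetted database of known-harmful adapters could look like: an exact file hash to catch byte-identical re-uploads, plus a crude weight-statistics embedding to catch near-duplicates. The function names, the embedding scheme, and the threshold are hypothetical placeholders rather than AISI's actual methodology; any real fingerprinting approach would need far stronger robustness to adversarial perturbation.

```python
# Illustrative sketch only; names, scheme, and threshold are hypothetical placeholders.
import hashlib
import torch
from safetensors.torch import load_file  # LoRA adapters are commonly shipped as safetensors files

SIMILARITY_THRESHOLD = 0.98  # hypothetical tuning parameter

def exact_fingerprint(path: str) -> str:
    """SHA-256 of the raw file: catches byte-identical re-uploads of a known artifact."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def weight_embedding(path: str) -> torch.Tensor:
    """Crude weight-statistics embedding: per-tensor summary stats, L2-normalised.
    Unlike an exact hash, this can survive re-serialisation or small edits."""
    state = load_file(path)
    stats = []
    for name in sorted(state):  # deterministic ordering across uploads
        t = state[name].float().flatten()
        stats.extend([t.mean().item(), t.std().item(), t.abs().max().item()])
    v = torch.tensor(stats)
    return v / (v.norm() + 1e-8)

def screen_upload(path: str, known_hashes: set, known_embeddings: list) -> bool:
    """Flag an upload that matches a vetted database of known-harmful adapters."""
    if exact_fingerprint(path) in known_hashes:
        return True
    emb = weight_embedding(path)
    for ref in known_embeddings:
        # Only compare adapters with matching embedding shapes (same tensor layout).
        if emb.shape == ref.shape and torch.dot(emb, ref).item() > SIMILARITY_THRESHOLD:
            return True
    return False
```

The reason for pairing the two signals is that a cryptographic hash is trivially evaded by re-saving the file, while a weight-level fingerprint degrades more gracefully under small edits; in practice, the hard research problem is making that second signal robust to deliberate obfuscation.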

This is an individual contributor role with no line management responsibilities. You will report to a senior Research Scientist overseeing our team’s misuse workstream.

Impact

Your work will raise safety standards across hosting and distribution layers, reduce the availability of CSAM/NCII-generating artifacts (e.g., LoRAs) on major platforms, inform industry protocols and possibly standards, and provide actionable evidence for government decisions.

Role Requirements

We’re flexible on the exact profile and expect successful candidates will meet many (but not necessarily all) of the criteria below. Depending on experience, we will consider candidates at either the RS or Senior RS level.

  • 3+ years of relevant experience in applied ML, trust & safety tooling, content moderation, security engineering, or adjacent technical fields; we also welcome strong earlier-career applicants (2–3 years) with demonstrated impact in open-source technical work.
  • Deep familiarity with open-weight image/video models (diffusion, LoRA), model hosting ecosystems (e.g., Hugging Face, GitHub), and the limitations of pre-deployment safeguards.
  • Strong methodological rigor and creativity; able to design automated, scalable evaluations and detection methods that generalise and avoid reliance on illegal content.
  • Strong Python and ML stack (PyTorch/JAX), data engineering, and systems skills; experience building pipelines and tooling that run at platform scale.
  • Knowledge of fingerprinting and detection approaches (e.g., perceptual hashing, embedding-based similarity, behavioural signatures), and their privacy and robustness trade-offs (an illustrative sketch follows this list).
  • Excellent writing and communication for technical and policy audiences; ability to translate evidence into practical governance guidance.
  • High agency, ethical judgment, and safe-working practices for sensitive topics.
  • Commitment to working from our London office in Whitehall for part of the week, with flexibility for remote work.
  • We’re looking for full-time commitment but are open to part-time arrangements.
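
As an illustration of the perceptual hashing item above, here is a minimal difference-hash (dHash) sketch. Production systems would normally rely on established, hardened schemes such as PDQ or PhotoDNA together with vetted hash lists from NGO partners, so treat the function names and the distance threshold below as assumptions for illustration only.

```python
# Minimal dHash sketch for illustration; real deployments use hardened schemes (e.g. PDQ).
from PIL import Image

def dhash(path: str, hash_size: int = 8) -> int:
    """Difference hash: compares adjacent pixel brightness, so it tolerates
    resizing and re-encoding in a way a cryptographic hash does not."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def matches_blocklist(path: str, blocklist: set, max_distance: int = 5) -> bool:
    """Hypothetical upload check: flag an image within a small Hamming distance
    of any hash in a vetted blocklist."""
    h = dhash(path)
    return any(hamming_distance(h, known) <= max_distance for known in blocklist)
```

Hamming distance over the hash bits is what gives a perceptual hash its tolerance to benign transformations, and tuning that distance threshold is exactly the sort of robustness trade-off the bullet above refers to.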

Preferred

  • Experience collaborating with hosting platforms, app stores, OS vendors, or regulators (e.g., Ofcom) on safety-by-design initiatives.
  • Familiarity with Online Safety Act requirements and platform trust & safety operations; prior work with NGOs such as IWF, Thorn, or STOPNCII.org.
  • Expertise in diffusion models and adaptation techniques (LoRA), model evaluation, and secure tooling for sensitive domains.
  • Experience with privacy-preserving computation, metadata-poor detection, and standardization efforts (RFCs, protocols).
  • Open-source contributions (tools, libraries) and evidence of leading cross-sector technical projects.

Example backgrounds

  • Senior trust & safety engineer who built automated content integrity pipelines for a large platform; strong OS/Strack record; experience with model hosting ecosystems.
  • Applied ML researcher with a PhD/postdoc in computer vision or ML safety; hands-on with diffusion/LoRA; led evaluations and published tooling used by industry.
  • Security/data engineer with 3+ years building scalable detection systems; experience in fingerprinting, hashing, and privacy-preserving methods; collaborated with regulators/NGOs.

What we offer

  • Impact you couldn’t have anywhere else.
  • Incredibly talented, mission-driven and supportive colleagues.
  • Direct influence on how frontier AI is governed and deployed globally.
  • Work with the Prime Minister’s AI Advisor and leading AI companies.
  • Opportunity to shape the first & best-resourced public-interest research team focused on AI security.
  • Resources & access: Pre-release access to multiple frontier models and ample compute; extensive operational support so you can focus on research and ship quickly; work with experts across national security, policy, AI research, and adjacent sciences.
  • If you’re talented and driven, you’ll own important problems early.
  • 5 development days per year, an annual L&D budget, and travel support for conferences and external collaborations.
  • Freedom to pursue research bets without product pressure.
  • Opportunities to publish and collaborate externally.

Life & family

  • Modern central London office (cafes, food court, gym) or option to work in similar government offices in Birmingham, Cardiff, Darlington, Edinburgh, Salford, or Bristol.
  • Hybrid working with opportunities for occasional remote work abroad.
  • At least 25 days’ annual leave, 8 public holidays, and extra team-wide breaks.
  • Generous paid parental leave (36 weeks of UK statutory leave shared between parents + 3 extra paid weeks + option for additional unpaid time).
  • Plus: a 27% government-funded pension contribution on top of salary, work-from-home equipment, and dental insurance.

Salary ranges

  • Level 3 – Total Package £65,000 – £75,000, inclusive of a base salary of £35,720 plus an additional technical talent allowance of between £29,280 – £39,280.
  • Level 4 – Total Package £85,000 – £95,000, inclusive of a base salary of £42,495 plus an additional technical talent allowance of between £42,505 – £52,505.
  • Level 5 – Total Package £105,000 – £115,000, inclusive of a base salary of £55,805 plus an additional technical talent allowance of between £49,195 – £59,195.
  • Level 6 – Total Package £125,000 – £135,000, inclusive of a base salary of £68,770 plus an additional technical talent allowance of between £56,230 – £66,230.
  • Level 7 – Total Package £145,000, inclusive of a base salary of £68,770 plus an additional technical talent allowance of £76,230.

Interview process

In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process. The interview process may vary from candidate to candidate, but you should expect a typical process to include technical proficiency tests, discussions with a cross-section of our team at AISI (including non-technical staff), and conversations with your team lead. The process will culminate in a conversation with members of the senior team here at AISI.

Candidates should expect to go through some or all of the following stages once an application has been submitted:

  • Initial interview.
  • Technical take-home test.
  • Second interview and review of take-home test.
  • Third interview.
  • Final interview with members of the senior team.

Additional information

The Civil Service Code sets out the standards of behaviour expected of civil servants. The Civil Service embraces diversity and promotes equal opportunities. We run a Disability Confident Scheme for candidates with disabilities who meet the minimum selection criteria. The Civil Service also offers a Redeployment Interview Scheme to civil servants who are at risk of redundancy, and who meet the minimum requirements for the advertised vacancy.

Research Scientist, Open Source Technical Safeguards employer: AI Security Institute

The AI Security Institute is an exceptional employer, offering a unique opportunity to work at the forefront of AI safety and governance in London. With a mission-driven culture, employees benefit from direct influence on global AI policies, extensive resources for research, and a supportive environment that fosters professional growth through collaboration with leading experts and organisations. The institute also provides generous benefits, including flexible working arrangements, substantial annual leave, and a strong commitment to employee well-being.

Contact Detail:

AI Security Institute Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land Research Scientist, Open Source Technical Safeguards

✨Network Like a Pro

Get out there and connect with people in the AI and tech safety space! Attend meetups, conferences, or even online webinars. The more you engage with others, the better your chances of landing that dream role.

✨Show Off Your Skills

Don’t just talk about your experience; demonstrate it! Create a portfolio showcasing your projects, especially those related to open-weight models or AI safety. This will give potential employers a clear view of what you can bring to the table.

✨Ace the Interview

Prepare for interviews by brushing up on both technical skills and current trends in AI safety. Practice common interview questions and be ready to discuss how your background aligns with the role at AISI. Confidence is key!

✨Apply Through Our Website

Make sure to apply directly through our website! It’s the best way to ensure your application gets seen by the right people. Plus, you’ll find all the latest opportunities and updates there.

We think you need these skills to ace Research Scientist, Open Source Technical Safeguards

Applied Machine Learning
Trust & Safety Tooling
Content Moderation
Security Engineering
Open-Weight Image/Video Models
Model Hosting Ecosystems
Automated Evaluation Design
Python Programming
ML Frameworks (PyTorch/JAX)
Data Engineering
Fingerprinting and Detection Approaches
Technical Writing and Communication
Ethical Judgment
Collaboration with NGOs
Knowledge of Online Safety Act

Some tips for your application 🫡

Tailor Your Application: Make sure to customise your CV and cover letter for the Research Scientist role. Highlight your relevant experience in applied ML and technical safeguards, and show us how your skills align with our mission at the AI Security Institute.

Showcase Your Technical Skills: We want to see your expertise in Python, ML stacks, and open-weight models. Include specific projects or contributions that demonstrate your ability to build scalable detection systems and your familiarity with model hosting ecosystems.

Communicate Clearly: Your writing should be clear and concise, especially when explaining complex technical concepts. Remember, we’re looking for someone who can translate evidence into practical governance guidance, so make sure your application reflects that ability.

Apply Through Our Website: Don’t forget to submit your application through our official website! It’s the best way for us to receive your details and ensure you’re considered for this exciting opportunity at the AI Security Institute.

How to prepare for a job interview at AI Security Institute

✨Know Your Stuff

Make sure you’re well-versed in the latest developments in AI safety and open-weight models. Brush up on your knowledge of CSAM and NCII risks, as well as the technical safeguards that can be implemented. This will show your passion and expertise during the interview.

✨Showcase Your Experience

Prepare to discuss your previous work in applied ML, trust & safety tooling, or content moderation. Be ready to share specific examples of projects you've worked on, especially those that involved collaboration with platforms or NGOs. This will help demonstrate your hands-on experience and problem-solving skills.

✨Ask Smart Questions

Come prepared with insightful questions about the role and the team’s current projects. This not only shows your interest but also helps you gauge if the position aligns with your career goals. Think about how you can contribute to their mission and what challenges they face.

✨Communicate Clearly

Since this role involves translating complex technical concepts into practical governance guidance, practice explaining your ideas clearly and concisely. Use examples from your past work to illustrate your points, and ensure you can communicate effectively with both technical and non-technical audiences.
