At a Glance
- Tasks: Join a dynamic team to develop safeguards against AI misuse and enhance online safety.
- Company: Be part of the world's leading AI Security Institute, shaping government action and tech development.
- Benefits: Enjoy competitive salary, generous leave, remote work options, and professional development opportunities.
- Why this job: Make a real impact on society by tackling critical AI risks and protecting vulnerable communities.
- Qualifications: Experience in applied ML, strong Python skills, and a passion for AI safety.
- Other info: Flexible working arrangements and a vibrant office environment in central London.
The predicted salary is between £65,000 and £145,000 per year.
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We research the prevalence and severity of high-impact societal risks caused by frontier AI deployment, and develop mitigations to address these risks. Core research topics include the use of AI to assist with criminal activities, undermine trust in information, jeopardise psychological wellbeing, or conduct malicious social engineering, as well as preventing critical overreliance on insufficiently robust systems.
We are interested in both immediate and medium-term risks. One emerging risk area we are concerned with is the use of open-weight models to generate child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII). In this role, you will join a strongly collaborative technical research team to help design and develop technical safeguards for open-weight models that reduce the risk of CSAM, NCII, and other harms. This is a research scientist position focused on developing technical safeguards against tampering with open-weight models.
This work belongs inside UK government because effective mitigation requires cross-agency coordination (Home Office, DSIT, Ofcom), engagement with regulated platforms under the Online Safety Act, and credible evidence to inform policy trade-offs across innovation, competition, and child protection. In this role, you will synthesise threat intelligence on how AI-generated CSAM and NCII are produced, create scalable screening methodologies that platforms can realistically run, and publish best-practice protocols with NGOs to raise the floor across the ecosystem.
You will work closely with engineers and domain experts across AISI, as well as external research collaborators at the Home Office, the Internet Watch Foundation, and Ofcom. Researchers on this team have substantial freedom to shape independent research agendas, lead collaborations, and initiate projects that push the frontier of what evaluations can reveal.
Example Projects:
- Publish a Problem Book framing the technical challenges and research directions for preventing CSAM/NCII misuse across model and hosting layers.
- Design and pilot scalable, automated screening methodologies that platforms can run on uploads before publication (topic-general prototypes that avoid exposure to illegal content); a minimal sketch of this idea follows below.
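To give a flavour of the second project, here is a minimal, hypothetical Python sketch of a topic-general, pre-publication screening hook. Every name in it (the UploadRecord record, the specific signals) is an illustrative assumption rather than AISI code or any platform's real API, and real screening methodologies would be far more sophisticated. Note that it inspects only upload metadata and file structure, never generated content, which is one way such checks can avoid exposing reviewers to illegal material.

```python
# Hypothetical sketch of a topic-general, pre-publication screening hook.
# All names (UploadRecord, risk_signals) are illustrative assumptions,
# not AISI code or any hosting platform's real API.
from dataclasses import dataclass, field


@dataclass
class UploadRecord:
    repo_id: str
    model_card: str                        # free-text README / model card
    file_names: list[str] = field(default_factory=list)
    base_model: str | None = None          # declared fine-tune parent, if any


def risk_signals(upload: UploadRecord) -> list[str]:
    """Return coarse, content-free signals that may merit further review.

    Deliberately topic-general: only metadata and repo structure are
    inspected, so the check never touches generated media.
    """
    signals = []
    # Small fine-tune adapters uploaded with no documentation are one
    # structural pattern a screening pipeline might surface for review.
    has_adapter = any(
        name.endswith((".safetensors", ".bin")) and "lora" in name.lower()
        for name in upload.file_names
    )
    if has_adapter and len(upload.model_card.strip()) < 100:
        signals.append("undocumented-lora-adapter")
    # Fine-tunes that omit their declared base model frustrate provenance checks.
    if has_adapter and upload.base_model is None:
        signals.append("missing-base-model-provenance")
    return signals


if __name__ == "__main__":
    example = UploadRecord("user/example-lora", "", ["adapter_lora.safetensors"])
    print(risk_signals(example))
    # ['undocumented-lora-adapter', 'missing-base-model-provenance']
```

In practice, signals like these would only be the cheap first stage of a pipeline, feeding into classifiers and human review rather than blocking uploads outright.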
You will report to a senior Research Scientist overseeing our team's misuse workstream. We are flexible on the exact profile and expect successful candidates to meet many (but not necessarily all) of the criteria below. Depending on experience, we will consider candidates at either the Research Scientist or Senior Research Scientist level.
Qualifications:
- At least three years' relevant experience in applied ML, trust & safety tooling, content moderation, security engineering, or adjacent technical fields; we also welcome strong earlier-career applicants (2–3 years) with demonstrated impact in open-source technical work.
- Deep familiarity with open-weight image/video models (diffusion, LoRA), model hosting ecosystems (e.g., Hugging Face, GitHub), and the limitations of pre-deployment safeguards.
- Able to design automated, scalable evaluations and detection methods that generalise and avoid reliance on illegal content.
- Strong Python and ML stack (PyTorch/JAX), data engineering, and systems skills; experience building pipelines and tooling that run at platform scale.
- Excellent writing and communication skills for technical and policy audiences; willingness to work from our London office in Whitehall for part of the week, with flexibility for remote work.
- Familiarity with Online Safety Act requirements and platform trust & safety operations; open-source contributions (tools, libraries) and evidence of leading cross-sector technical projects.
Benefits:
- 5 development days per year, an annual L&D budget, and travel support for conferences and external collaborations.
- Freedom to pursue research bets without product pressure.
- Modern central London office (cafes, food court, gym) or option to work in similar government offices in Birmingham, Cardiff, Darlington, Edinburgh, Salford, or Bristol.
- Hybrid working with opportunities for occasional remote work abroad.
- At least 25 days' annual leave, 8 public holidays, and extra team-wide breaks.
- Generous paid parental leave (36 weeks of UK statutory leave shared between parents + 3 extra paid weeks + option for additional unpaid time).
- A 27% government-funded pension contribution on top of salary, plus work-from-home equipment and dental insurance.
Successful candidates must undergo a criminal record check and obtain Baseline Personnel Security Standard (BPSS) clearance before they can be appointed. The Civil Service embraces diversity and promotes equal opportunities.
Scientist Research and Development in London
Employer: AI Security Institute
Contact Details:
AI Security Institute Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Scientist Research and Development role in London
✨Tip Number 1
Network like a pro! Reach out to folks in the AI and security fields on LinkedIn or at industry events. A friendly chat can open doors that a CV just can't.
✨Tip Number 2
Show off your skills! If you’ve got projects or contributions to open-source work, make sure to highlight them in conversations. It’s all about demonstrating your impact and expertise.
✨Tip Number 3
Prepare for interviews by diving deep into the latest trends in AI security. Being able to discuss current challenges and solutions will impress interviewers and show you’re genuinely interested.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets the attention it deserves. Plus, we love seeing candidates who are proactive!
Some tips for your application 🫡
Tailor Your Application: Make sure to customise your CV and cover letter for the Scientist Research and Development role. Highlight your relevant experience in applied ML, trust & safety tooling, and any specific projects that align with our mission at the AI Security Institute.
Showcase Your Technical Skills: We want to see your technical prowess! Be sure to mention your experience with Python, ML stacks like PyTorch or JAX, and any automated evaluation methods you've designed. This is your chance to shine!
Communicate Clearly: Your writing should be clear and engaging. Remember, we’re looking for excellent communication skills for both technical and policy audiences. Use this opportunity to demonstrate how well you can convey complex ideas simply.
Apply Through Our Website: Don’t forget to submit your application through our website! It’s the best way for us to receive your details and ensures you’re considered for this exciting opportunity at the AI Security Institute.
How to prepare for a job interview at AI Security Institute
✨Know Your Stuff
Make sure you brush up on your knowledge of open-weight models and the specific risks associated with AI, especially around CSAM and NCII. Familiarise yourself with the latest research and methodologies in this area, as well as the Online Safety Act requirements. This will show that you're not just interested in the role but are also informed about the challenges the team is tackling.
✨Showcase Your Skills
Prepare to discuss your experience with Python, ML stacks like PyTorch or JAX, and any relevant projects you've worked on. Be ready to explain how you've built scalable detection systems or automated evaluations in the past. Concrete examples will help demonstrate your technical prowess and how it aligns with the team's goals.
✨Collaboration is Key
Since this role involves working closely with engineers and domain experts, highlight your collaborative experiences. Share examples of how you've successfully led cross-sector projects or worked within a team to achieve common goals. This will illustrate your ability to thrive in a collaborative environment, which is crucial for this position.
✨Communicate Clearly
Strong writing and communication skills are essential for this role. Practice explaining complex technical concepts in simple terms, as you'll need to communicate with both technical and policy audiences. Consider preparing a brief presentation or summary of your past work to showcase your ability to convey information effectively.