At a Glance
- Tasks: Join a team tackling AI risks and develop safeguards against misuse of technology.
- Company: Be part of the AI Security Institute, a leader in AI risk research.
- Benefits: Enjoy competitive salary, generous leave, and professional development opportunities.
- Why this job: Make a real impact on AI safety while collaborating with top experts.
- Qualifications: Experience in ML, security engineering, or related fields; strong Python skills required.
- Other info: Flexible working arrangements and a vibrant London office await you.
The predicted salary is between £65,000 and £145,000 per year.
About The AI Security Institute
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We research the prevalence and severity of high-impact societal risks caused by frontier AI deployment and develop mitigations to address them. Core research topics include the use of AI to assist with criminal activities, critical overreliance on insufficiently robust systems, the undermining of trust in information, harm to psychological wellbeing, and malicious social engineering. We are interested in both immediate and medium-term risks.
One emerging risk area we are concerned with is the use of open weight models to generate child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII). AISI has previously published research on methods for making open weight models more robust against malicious tampering. In this role, you will join a strongly collaborative technical research team to help design and develop technical safeguards for open weight models that reduce the risk of CSAM, NCII, and related harms. We do not expect this role to handle this kind of content directly.
This is a research scientist position focused on developing technical safeguards against tampering with open weight models. This work belongs inside the UK government because effective mitigation requires cross-agency coordination (Home Office, DSIT, Ofcom), engagement with regulated platforms under the Online Safety Act, and credible evidence to inform policy trade-offs across innovation, competition, and child protection.
In this role, you will synthesise threat intelligence on how AI-generated CSAM and NCII are produced, create scalable screening methodologies that platforms can realistically run, and publish best-practice protocols with NGOs to raise the floor across the ecosystem. You will work closely with engineers and domain experts across AISI, as well as external research collaborators at the Home Office, the Internet Watch Foundation, and Ofcom. Researchers on this team have substantial freedom to shape independent research agendas, lead collaborations, and initiate projects that push the frontier of what evaluations can reveal.
Example Projects
- Publish a Problem Book framing the technical challenges and research directions for preventing CSAM/NCII misuse across model and hosting layers.
- Design and pilot scalable, automated screening methodologies that platforms can run on uploads before publication (topic-general prototypes that avoid exposure to illegal content).
You will report to a senior Research Scientist overseeing our team's misuse workstream. We're flexible on the exact profile and expect successful candidates to meet many (but not necessarily all) of the criteria below. Depending on experience, we will consider candidates at either the RS or Senior RS level.
- At least three years of relevant experience in applied ML, trust & safety tooling, content moderation, security engineering, or adjacent technical fields; we also welcome strong earlier-career applicants (2–3 years) with demonstrated impact in open-source technical work.
- Deep familiarity with open-weight image/video models (diffusion, LoRA), model hosting ecosystems (e.g., Hugging Face, GitHub), and the limitations of pre-deployment safeguards.
- Ability to design automated, scalable evaluations and detection methods that generalise and avoid reliance on illegal content.
- Strong Python and ML stack (PyTorch/JAX), data engineering, and systems skills; experience building pipelines and tooling that run at platform scale.
- Excellent writing and communication skills for technical and policy audiences. You should be able to work from our London office in Whitehall for part of the week, with flexibility for remote work; we're looking for a full-time commitment but are open to part-time arrangements.
- Familiarity with Online Safety Act requirements and platform trust & safety operations; open-source contributions (tools, libraries) and evidence of leading cross-sector technical projects.
What We Offer
- Opportunity to shape the first and best-resourced public-interest research team focused on AI security.
- Extensive operational support so you can focus on research and ship quickly.
- Work with experts across national security, policy, AI research, and adjacent sciences.
- 5 development days per year, an annual L&D budget, and travel support for conferences and external collaborations.
- Freedom to pursue research bets without product pressure.
- Modern central London office (cafes, food court, gym) or the option to work in similar government offices in Birmingham, Cardiff, Darlington, Edinburgh, Salford, or Bristol.
- Hybrid working with opportunities for occasional remote work abroad.
- At least 25 days' annual leave, 8 public holidays, and extra team-wide breaks.
- Generous paid parental leave (36 weeks of UK statutory leave shared between parents + 3 extra paid weeks + option for additional unpaid time).
- Plus: 27% government-funded pension contribution on top of salary, work-from-home equipment, and dental insurance.
Most offers land between £65,000 and £145,000 (base plus technical allowance), with 27% employer pension and other benefits on top.
Successful candidates must undergo a criminal record check and obtain Baseline Personnel Security Standard (BPSS) clearance before they can be appointed. The Civil Service embraces diversity and promotes equal opportunities.
Diversity and Inclusion: The Civil Service is committed to attracting, retaining and investing in talent wherever it is found.
Employer: AI Security Institute
Contact: AI Security Institute Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Research Scientist, Security role in London
✨Tip Number 1
Network like a pro! Reach out to folks in the AI security space, especially those connected to the AI Security Institute. LinkedIn is your best mate here—send personalised messages and ask for informational chats. You never know who might help you land that interview!
✨Tip Number 2
Show off your skills! If you've got any projects or contributions related to open-weight models or AI safety, make sure to highlight them. Create a portfolio or GitHub repo showcasing your work. This will give you an edge and demonstrate your hands-on experience.
✨Tip Number 3
Prepare for the interview like it’s the final exam! Research the AI Security Institute's recent projects and publications. Be ready to discuss how your background aligns with their mission and how you can contribute to their goals. Confidence is key!
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, keep an eye on our careers page for updates and new opportunities. We’re always looking for passionate individuals to join our team!
Some tips for your application 🫡
Tailor Your Application: Make sure to customise your CV and cover letter to highlight your relevant experience in applied ML, security engineering, or content moderation. We want to see how your skills align with the specific needs of the Research Scientist role at AISI.
Showcase Your Technical Skills: Don’t hold back on showcasing your Python and ML stack expertise! Mention any projects where you've built scalable detection systems or automated evaluations. This is your chance to impress us with your technical prowess.
Communicate Clearly: Since this role involves working with both technical and policy audiences, make sure your writing is clear and concise. We appreciate well-structured applications that convey complex ideas simply—show us you can bridge that gap!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it gives you a chance to explore more about what we do at AISI!
How to prepare for a job interview at AI Security Institute
✨Know Your Stuff
Make sure you brush up on the latest trends and challenges in AI security, especially around open-weight models. Familiarise yourself with the specific risks mentioned in the job description, like CSAM and NCII. This will show that you're not just interested in the role but also understand the critical issues at play.
✨Showcase Your Skills
Prepare to discuss your experience with Python, ML stacks, and any relevant projects you've worked on. Be ready to explain how you've designed scalable evaluations or detection methods in the past. Concrete examples will help demonstrate your technical prowess and problem-solving abilities.
✨Communicate Clearly
Since you'll be working with both technical and policy audiences, practice explaining complex concepts in simple terms. Think about how you can convey your ideas effectively, whether it's through a presentation or a casual chat. Good communication skills are key in this collaborative environment.
✨Ask Insightful Questions
Prepare thoughtful questions about the team's current projects, the collaboration with external partners, or the impact of your work on policy. This shows your genuine interest in the role and helps you gauge if the company culture aligns with your values.