At a Glance
- Tasks: Join a dynamic team to develop safeguards against AI misuse and societal risks.
- Company: AI Security Institute, leading the charge in AI risk research and mitigation.
- Benefits: Competitive salary, generous leave, remote work options, and professional development support.
- Why this job: Make a real impact on AI safety while collaborating with top experts in the field.
- Qualifications: Experience in applied ML, strong Python skills, and a passion for AI safety.
- Other info: Flexible working arrangements and opportunities for career growth in a supportive environment.
The predicted salary is between £65,000 and £145,000 per year.
About The AI Security Institute
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We research the prevalence and severity of high-impact societal risks caused by frontier AI deployment, and develop mitigations to address these risks. Core research topics include the use of AI to assist criminal activity, undermine trust in information, jeopardise psychological wellbeing, or conduct malicious social engineering, as well as preventing critical overreliance on insufficiently robust systems.
This role will synthesise threat intelligence on how AI-generated CSAM (child sexual abuse material) and NCII (non-consensual intimate imagery) are produced, create scalable screening methodologies that platforms can realistically run, and publish best-practice protocols with NGOs to raise the floor across the ecosystem. You will report to a senior Research Scientist overseeing our team's misuse workstream.
We're flexible on the exact profile and expect successful candidates will meet many (but not necessarily all) of the criteria below:
- 3+ years of relevant experience in applied ML, trust & safety tooling, content moderation, security engineering, or adjacent technical fields; we also welcome strong earlier-career applicants (2–3 years) with demonstrated impact in open-source technical work.
- Deep familiarity with open-weight image/video models (diffusion, LoRA), model hosting ecosystems (e.g., Hugging Face, GitHub), and the limitations of pre-deployment safeguards.
- Able to design automated, scalable evaluations and detection methods that generalise and avoid reliance on illegal content (a minimal, illustrative sketch of this kind of check follows this list).
- Strong Python and ML stack (PyTorch/JAX), data engineering, and systems skills; experience building pipelines and tooling that run at platform scale.
- Excellent writing and communication skills for technical and policy audiences; willingness to work from our London office in Whitehall for part of the week, with flexibility for remote work.
- Familiarity with Online Safety Act requirements and platform trust & safety operations; open-source contributions (tools, libraries) and evidence of leading cross-sector technical projects.
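To give a concrete, purely illustrative flavour of the screening work described above, here is a minimal sketch of a perceptual-hash check in Python: the style of approach that lets a platform flag known material without ever storing or training on it. Everything here is a hypothetical placeholder, not the Institute's actual methodology; the `imagehash` library is one option among several, and the hash value and `MAX_DISTANCE` threshold are invented for illustration.

```python
# Purely illustrative sketch of hash-list screening; not the Institute's
# actual methodology. Requires: pip install Pillow imagehash
from PIL import Image
import imagehash

# Hypothetical known-bad hash list; in practice this would be loaded from a
# vetted, externally maintained database (e.g. one curated with NGO partners),
# never hard-coded.
KNOWN_HASHES = [imagehash.hex_to_hash("d1c4f0f0e0c08000")]  # placeholder value
MAX_DISTANCE = 8  # Hamming-distance threshold for a "near match" (illustrative)

def screen_image(path: str) -> bool:
    """Return True if the image's perceptual hash is close to a known hash."""
    candidate = imagehash.phash(Image.open(path))  # 64-bit perceptual hash
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_HASHES)

if __name__ == "__main__":
    print(screen_image("upload.jpg"))
```

The design point worth noting: because the comparison runs against externally curated hashes, the screening pipeline itself never needs to hold the underlying illegal material, which is one way detection methods can avoid reliance on illegal content at platform scale.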
We offer extensive operational support so you can focus on research and ship quickly, with opportunities for occasional remote work abroad. Benefits include:
- 5 development days per year, an annual L&D budget, and travel support for conferences and external collaborations.
- At least 25 days' annual leave, 8 public holidays, and extra team-wide breaks.
- Generous paid parental leave (36 weeks of UK statutory leave shared between parents + 3 extra paid weeks + option for additional unpaid time).
- A 27% government-funded pension contribution on top of salary, work-from-home equipment, and dental insurance.
Successful candidates must undergo a criminal record check and obtain Baseline Personnel Security Standard (BPSS) clearance before they can be appointed. The Civil Service embraces diversity and promotes equal opportunities.
Scientist I, Research & Development in London | Employer: AI Security Institute
Contact Details:
AI Security Institute Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Scientist I, Research & Development role in London
✨Tip Number 1
Network like a pro! Reach out to people in the AI and security fields, especially those connected to the AI Security Institute. Attend relevant events or webinars, and don't be shy about asking for informational interviews. You never know who might have the inside scoop on job openings!
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects related to AI, machine learning, or security engineering. This could include open-source contributions or any research you've done. Having tangible evidence of your work can really set you apart from other candidates.
✨Tip Number 3
Prepare for interviews by diving deep into the latest trends in AI security. Brush up on your knowledge of the Online Safety Act and how it impacts AI development. Being well-versed in these topics will not only impress interviewers but also show that you're genuinely interested in the role.
✨Tip Number 4
Apply through our website! It's the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who take the initiative to apply directly. Don't forget to tailor your application to highlight your relevant experience and passion for AI safety!
Some tips for your application 🫡
Tailor Your Application: Make sure to customise your CV and cover letter for the Scientist I role. Highlight your relevant experience in applied ML, trust & safety tooling, and any projects that align with our mission at the AI Security Institute. We want to see how your skills can directly contribute to our work!
Showcase Your Technical Skills: Don't hold back on showcasing your technical prowess! Mention your experience with Python, ML stacks like PyTorch or JAX, and any automated evaluation methods you've designed. We're keen to see how you can help us tackle the challenges of AI security.
Communicate Clearly: Your writing should be clear and engaging, especially since you'll be communicating with both technical and policy audiences. Use straightforward language to explain complex ideas, and don't forget to proofread for clarity and professionalism!
Apply Through Our Website: We encourage you to apply directly through our website. It's the best way to ensure your application gets the attention it deserves. Plus, you'll find all the details about the role and our team there!
How to prepare for a job interview at AI Security Institute
✨Know Your Stuff
Make sure you brush up on your knowledge of open-weight models and the specific risks associated with AI, especially around CSAM and NCII. Familiarise yourself with the latest research and methodologies in this area, as well as the Online Safety Act requirements. This will show that you're not just interested in the role but also passionate about the field.
✨Showcase Your Skills
Prepare to discuss your experience with Python, ML stacks like PyTorch or JAX, and any relevant projects you've worked on. Be ready to explain how you've designed automated evaluations or detection methods in the past. Concrete examples will help demonstrate your technical prowess and problem-solving abilities.
✨Collaboration is Key
Since this role involves working closely with engineers and domain experts, highlight your teamwork skills. Share examples of successful collaborations from your previous roles, especially those that required cross-agency coordination or engagement with external partners. This will illustrate your ability to work effectively in a team-oriented environment.
✨Communicate Clearly
Strong writing and communication skills are essential for this position. Practice explaining complex technical concepts in simple terms, as you'll need to communicate with both technical and policy audiences. Consider preparing a brief presentation or summary of your past work to showcase your ability to convey information clearly and effectively.