Staff Threat Detection & Response Engineer London, UK

London · Full-Time · £65,000 – £75,000 / year (est.) · No remote working
AI Safety Institute

At a Glance

  • Tasks: Join us to design and implement cutting-edge security measures for AI systems.
  • Company: Be part of the AI Security Institute, a leader in AI risk management.
  • Benefits: Competitive salary, flexible working options, and opportunities for professional growth.
  • Why this job: Make a real impact on AI safety while working with top experts in the field.
  • Qualifications: Experience in detection engineering and a passion for AI security.
  • Other info: Dynamic team environment with excellent career advancement opportunities.

The predicted salary is between £65,000 and £75,000 per year.

About the AI Security Institute

The AI Security Institute is the world’s largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We’re in the heart of the UK government with direct lines to No. 10, and we work with frontier developers and governments globally.

We’re here because governments are critical for advanced AI going well, and AISI is uniquely positioned to mobilize them. With our resources and the UK government’s unique agility and international influence, this is the best place to shape both AI development and government action.

About the Team:

Security Engineering at the AI Security Institute (AISI) exists to help our researchers move fast, safely. We are founding the Security Engineering team in a largely greenfield cloud environment, and we treat security as a measurable, researcher-centric product: secure-by-design platforms, automated governance, and intelligence-led detection that protect our people, partners, models, and data. We work shoulder to shoulder with research units and core technology teams, and we optimise for enablement over gatekeeping, proportionate controls, low ego, and high ownership.

What you might work on:

  • Help design and ship paved roads and secure defaults across our platform so researchers can build quickly and safely
  • Build provenance and integrity into the software supply chain (signing, attestation, artefact verification, reproducibility)
  • Support strengthened identity, segmentation, secrets, and key management to create a defensible foundation for evaluations at scale
  • Develop automated, evidence-driven assurance mapped to relevant standards, reducing audit toil and improving signal
  • Create detections and response playbooks tailored to model evaluations and research workflows, and run exercises to validate them
  • Threat model new evaluation pipelines with research and core technology teams, fixing classes of issues at the platform layer
  • Assess third party services and hardware/software supply chains; introduce lightweight controls that raise the bar
  • Contribute to open standards and open source, and share lessons with the broader community where appropriate
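For illustration, the artefact-verification work described above can start from something as simple as pinning and checking digests. The sketch below is a minimal, hypothetical example (the artefact contents and where the pinned digest comes from are assumptions); in practice this sits behind signing and attestation tooling rather than being called by hand.

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Check an artefact's SHA-256 digest against a pinned value,
    e.g. one recorded at build time alongside a signed attestation."""
    return hashlib.sha256(data).hexdigest() == expected_sha256
```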

If you want to build security that accelerates frontier-scale AI safety research, and see your work land in production quickly, this is a good place to do it.

Role Summary

Build and maintain a modern, mission-aware detection engineering practice. You’ll own AISI’s threat model, define detections that reflect AISI-specific risks, and collaborate with DSIT’s SOC to extend coverage and context. You’ll focus on signal quality, not alert volume. You will extend coverage to AI/ML surfaces, instrumenting the model lifecycle and AI platforms so threats to model weights, data pipelines, GPU estates, and inference endpoints are visible, correlated, and actionable.

Responsibilities

  • Define and evolve AISI’s threat model, working with platform, research, and policy teams
  • Write detection rules, correlation logic, and hunt queries tailored to AISI’s risk surface
  • Ensure relevant signals are logged, routed, and contextualised appropriately
  • Maintain detection playbooks, triage documentation, and escalation workflows
  • Act as a liaison between AISI engineering and DSIT’s central SOC
  • Evaluate detection gaps and propose new signal sources or telemetry improvements
  • Extend the threat model to AI/ML: data/feature pipelines, training/finetuning, evaluations/release gates, registries, GPUs, and inference services
  • Develop detections for AI-specific risks: model weight custody/exfil (e.g., anomalous KMS decrypts, S3 access), registry tampering, dataset poisoning, training pipeline/image compromise, GPU abuse/cryptomining, and inference abuse (prompt injection/data exfil patterns, anomalous RAG connector access)
  • Define hunts and correlations that tie AI safety/evaluation signals (red-team hits, eval regressions, release gate overrides) to security events and insider/outsider activity
  • Author and rehearse AI-focused incident playbooks (weights leak, compromised model artefacts, inference abuse campaigns) with DSIT SOC
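To give a flavour of the detection-as-code work described above, here is a hedged sketch of a rule for the weight-custody risk: it scans CloudTrail-style records for `Decrypt` calls against a model-weights KMS key made by principals outside an allow-list. The key ARN, role names, and thresholds are illustrative assumptions, not details of AISI's estate.

```python
# Sketch of a detection-as-code rule: flag KMS Decrypt calls against a
# model-weights key from principals outside an expected allow-list.
# All ARNs and role names below are illustrative, not real infrastructure.
WEIGHTS_KEY_ARN = "arn:aws:kms:eu-west-2:111122223333:key/model-weights"
ALLOWED_PRINCIPALS = {"role/training-pipeline", "role/eval-runner"}

def flag_anomalous_decrypts(events, key_arn=WEIGHTS_KEY_ARN,
                            allowed=ALLOWED_PRINCIPALS):
    """Return Decrypt events touching the weights key whose calling
    principal does not match any role in the allow-list."""
    flagged = []
    for event in events:
        if event.get("eventName") != "Decrypt":
            continue
        touched = {r.get("ARN") for r in event.get("resources", [])}
        if key_arn not in touched:
            continue
        principal = event.get("userIdentity", {}).get("arn", "")
        if not any(role in principal for role in allowed):
            flagged.append(event)
    return flagged
```

A real deployment would version these rules in source control, test them against replayed telemetry, and route hits into the triage workflow shared with the SOC.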

Profile requirements

  • Strong understanding of detection-as-code, MITRE ATT&CK, log pipelines, and cloud signal sources
  • Able to navigate outsourced SOC relationships while owning internal threat understanding
  • Familiarity with AWS CloudTrail, GuardDuty, KMS, S3 access logs, EKS/ECS audit, custom log ingestion; exposure to SageMaker/Bedrock or equivalent a plus
  • Curious, methodical, and proactive mindset
  • Practical grasp of AI/ML attack surfaces and telemetry needs (model registries, weights custody, GPU/accelerator fleets, inference gateways, vector stores)
  • Familiarity with AI threat frameworks (e.g., MITRE ATLAS, OWASP Top 10 for LLMs) desirable
  • Detection engineering mindset focused on signal quality and measurable coverage
  • Understanding of cloud-native telemetry and logging gaps
  • Instrumenting and detecting threats across AI/ML workloads (weights, datasets, training/inference) and correlating safety and security signals
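The final point above, correlating safety and security signals, could be sketched roughly as a time-window join on actor. The field names and the 60-minute window are assumptions for illustration only:

```python
from datetime import timedelta

def correlate(safety_events, security_events, window_minutes=60):
    """Pair each safety signal (e.g. a release-gate override) with any
    security event by the same actor inside the following time window."""
    window = timedelta(minutes=window_minutes)
    pairs = []
    for s in safety_events:
        for e in security_events:
            delta = e["time"] - s["time"]
            if e["actor"] == s["actor"] and timedelta(0) <= delta <= window:
                pairs.append((s, e))
    return pairs
```

At scale this join would live in the SIEM's query language rather than application code, but the shape of the correlation is the same.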

Salary & Benefits

We are hiring individuals at all ranges of seniority and experience within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is available below; salaries comprise a base salary and a technical allowance, plus additional benefits as detailed on this page.

  • Level 3 – Total Package £65,000 – £75,000, inclusive of a base salary of £35,720 plus an additional technical talent allowance of between £29,280 – £39,280
  • Level 4 – Total Package £85,000 – £95,000, inclusive of a base salary of £42,495 plus an additional technical talent allowance of between £42,505 – £52,505
  • Level 5 – Total Package £105,000 – £115,000, inclusive of a base salary of £55,805 plus an additional technical talent allowance of between £49,195 – £59,195
  • Level 6 – Total Package £125,000 – £135,000, inclusive of a base salary of £68,770 plus an additional technical talent allowance of between £56,230 – £66,230
  • Level 7 – Total Package £145,000, inclusive of a base salary of £68,770 plus an additional technical talent allowance of £76,230

This role sits outside of the DDaT pay framework, as its scope requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.

There are a range of pension options available which can be found through the Civil Service website.


Staff Threat Detection & Response Engineer London, UK employer: AI Safety Institute

The AI Security Institute is an exceptional employer, offering a unique opportunity to work at the forefront of AI safety research in London. With a strong focus on employee growth, a collaborative work culture, and direct engagement with government entities, AISI empowers its staff to make impactful contributions while enjoying competitive salaries and comprehensive benefits. Join us to be part of a mission-driven team that values innovation, security, and professional development in a dynamic environment.

Contact Detail:

AI Safety Institute Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land Staff Threat Detection & Response Engineer London, UK

✨Tip Number 1

Network like a pro! Get out there and connect with folks in the AI security space. Attend meetups, webinars, or even just grab a coffee with someone in the industry. You never know who might have the inside scoop on job openings!

✨Tip Number 2

Show off your skills! Create a portfolio that highlights your projects related to threat detection and response. Whether it's a GitHub repo or a personal website, make sure it’s easy for potential employers to see what you can do.

✨Tip Number 3

Prepare for interviews by practising common questions in the field of AI security. Think about how you would tackle specific threats or challenges mentioned in the job description. The more prepared you are, the more confident you'll feel!

✨Tip Number 4

Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who take the initiative to engage directly with us.

We think you need these skills to ace Staff Threat Detection & Response Engineer London, UK

Threat Modelling
Detection Engineering
MITRE ATT&CK
Log Pipelines
Cloud Signal Sources
AWS CloudTrail
GuardDuty
KMS
S3 Access Logs
EKS/ECS Audit
AI/ML Attack Surfaces
Telemetry Needs
Signal Quality
Incident Playbooks
Collaboration with SOCs

Some tips for your application 🫡

Tailor Your Application: Make sure to customise your CV and cover letter to highlight your experience with detection engineering and AI/ML security. We want to see how your skills align with our mission at the AI Security Institute!

Showcase Your Technical Skills: Don’t hold back on detailing your technical expertise! Mention your familiarity with tools like AWS CloudTrail, GuardDuty, and your understanding of detection-as-code. This is your chance to shine!

Be Clear and Concise: When writing your application, keep it straightforward. Use clear language and avoid jargon where possible. We appreciate a well-structured application that gets straight to the point.

Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role you’re interested in. Don’t miss out!

How to prepare for a job interview at AI Safety Institute

✨Know Your Threat Models

Before the interview, dive deep into understanding threat models, especially those relevant to AI and ML. Be prepared to discuss how you would define and evolve AISI's threat model, and think about specific risks that could impact their operations.

✨Showcase Your Detection Engineering Skills

Brush up on detection-as-code principles and be ready to talk about your experience with writing detection rules and correlation logic. Highlight any past projects where you’ve successfully implemented detection strategies, particularly in cloud environments.

✨Familiarise Yourself with Relevant Tools

Make sure you’re comfortable discussing tools like AWS CloudTrail, GuardDuty, and KMS. If you have experience with SageMaker or similar platforms, be ready to share how you've used them to enhance security and detection capabilities.

✨Prepare for Scenario-Based Questions

Expect scenario-based questions that test your problem-solving skills in real-world situations. Think of examples where you’ve had to assess detection gaps or propose new signal sources, and be ready to explain your thought process clearly.
