At a Glance
- Tasks: Lead security initiatives for AI systems, ensuring robust protection against emerging threats.
- Company: AI-first software company revolutionising complex industries with innovative solutions.
- Benefits: Competitive salary, flexible work options, and opportunities for professional growth.
- Why this job: Join a cutting-edge team and make a real impact on AI security.
- Qualifications: Experience in offensive security, threat modelling, and cloud security.
- Other info: Dynamic role with autonomy and the chance to shape security frameworks.
The predicted salary is between £54,000 and £84,000 per year.
My client is an AI-first software company building foundational AI solutions for complex industries. As they scale, they are committed to becoming AI-native across every department, embedding automation, augmentation, and intelligence into the core of how they operate.
As Lead Security Engineer (AI and Cloud), you will take end-to-end ownership of the security posture of the platform and its associated infrastructure. This spans multi-model inference systems, real-time data ingestion, edge deployments, and hybrid cloud environments.
Job Responsibilities
- Design and lead red team campaigns against model logic, inference systems, and edge deployments.
- Build secure-by-default infrastructure for model deployment and feedback loops.
- Collaborate with platform, ML, and infrastructure engineers to embed security throughout.
- Represent our security posture in client conversations and enterprise reviews.
- Stay current with emerging threats in adversarial ML, industrial systems, and LLM safety.
The overall vision for this security engineering leadership role is to build best-in-class agentic security systems, from the SOC to AI pipelines, with minimal human interaction. This is a hands-on engineering role with a strong offensive security focus. You will think like an attacker, build like a systems architect, and validate everything through adversarial testing.
You will work across the AI stack, threat model novel attack surfaces, simulate adversaries, and embed controls that protect safety, trust, and uptime, all without slowing down.
This role is best suited for someone who:
- Has built or broken AI systems in the wild and knows where they fail.
- Has experience across red team tactics, cloud security, and AI/ML pipeline security.
- Enjoys threat modelling and then actually testing the threat.
- Is fluent in offensive techniques but just as comfortable writing detection logic, securing cloud deployments, and hardening systems.
- Knows how to navigate ambiguity and build security frameworks where none exist.
- Can think clearly about risk, consequence, and exposure, not just vulnerabilities.
- Is motivated by impact, autonomy, and hard problems, not by headcount or prestige.
My client cares less about how long you’ve been doing this and more about how deep you go. This role is designed for someone who wants to own the full security lifecycle of an adversarial surface and prove it works. You will be responsible for finding and plugging the gaps in defences before someone exploits them.
Job Requirements
- Threat Modelling and Continuous Exposure Management: Build and maintain a real-time threat model across the model, infrastructure, and data layers. Prioritise exposures by exploitability × physical consequence, not just CVSS. Operate a living CTEM (Continuous Threat Exposure Management) cycle. Report exposure posture to leadership with confidence and clarity.
- Offensive Testing and Red Team Simulation: Simulate adversaries targeting foundation models (evasion, poisoning, trust boundary abuse) and edge deployments (signed binaries, inference manipulation). Collaborate with engineers to build, break, and then harden those systems.
- Cloud and Code-Centric Security Architecture: Work across AWS and Azure environments, securing build, deployment, and runtime. Implement verification for edge systems (secure boot, artifact integrity, telemetry hygiene). Operate close to the code: understand pipelines, dependencies, APIs, and risks at a system level.
- Safety-Critical AI Design and Monitoring: Create trust boundaries between AI output and operator action. Implement uncertainty scoring, fallback logic, and human-in-the-loop systems. Build instrumentation for drift detection, anomalous output, and unsafe recommendations.
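To make the "exploitability × physical consequence, not just CVSS" prioritisation above concrete, here is a minimal sketch. The field names, 1–5 scales, and example exposures are illustrative assumptions, not anything specified in the role:

```python
# Hypothetical sketch: rank exposures by exploitability x physical consequence
# rather than raw CVSS. The 1-5 scales and example data are assumptions.
from dataclasses import dataclass


@dataclass
class Exposure:
    name: str
    cvss: float          # standard CVSS base score (0-10)
    exploitability: int  # 1-5: how easily an attacker can trigger it
    consequence: int     # 1-5: physical/operational impact if exploited

    @property
    def priority(self) -> int:
        # The multiplication means a finding must be BOTH reachable and
        # consequential to rank highly.
        return self.exploitability * self.consequence


exposures = [
    Exposure("prompt injection in inference API", cvss=6.1, exploitability=5, consequence=4),
    Exposure("stale TLS cipher on internal dashboard", cvss=7.5, exploitability=2, consequence=1),
]

# Highest exploitability x consequence first, regardless of CVSS ordering:
# here the lower-CVSS prompt-injection finding outranks the higher-CVSS one.
ranked = sorted(exposures, key=lambda e: e.priority, reverse=True)
for e in ranked:
    print(f"{e.priority:>2}  {e.name}")
```

Note how the ranking inverts the CVSS ordering: that inversion is the whole point of weighting by consequence in a safety-critical context.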
Principal Security Engineer employer: CyberApt Recruitment
Contact Detail:
CyberApt Recruitment Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Principal Security Engineer role
✨Tip Number 1
Network like a pro! Attend industry meetups, conferences, or online webinars related to AI and security. Engaging with professionals in the field can lead to valuable connections and potential job opportunities.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving AI systems and security. This will give you an edge during interviews and demonstrate your hands-on experience.
✨Tip Number 3
Prepare for technical interviews by practising common security scenarios and red team tactics. We recommend simulating real-world attacks and defences to showcase your problem-solving skills and thought process.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets noticed. Plus, we love seeing candidates who are proactive about their job search.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the specific skills and experiences that align with the Principal Security Engineer role. Highlight your experience in AI systems, red team tactics, and cloud security to show us you’re the right fit.
Craft a Compelling Cover Letter: Use your cover letter to tell us why you’re passionate about security engineering and how your background makes you a great candidate. Share specific examples of your work in threat modelling or offensive testing to grab our attention.
Showcase Your Technical Skills: Don’t shy away from getting technical! We want to see your expertise in areas like adversarial ML and cloud security. Use clear examples to demonstrate your hands-on experience and problem-solving abilities.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you don’t miss any important updates about the hiring process!
How to prepare for a job interview at CyberApt Recruitment
✨Know Your Stuff
Make sure you’re well-versed in the latest trends and challenges in AI security. Brush up on adversarial ML, threat modelling, and red team tactics. Being able to discuss these topics confidently will show that you’re not just familiar with the theory but also understand the practical implications.
✨Showcase Your Experience
Prepare specific examples from your past work where you’ve successfully built or broken AI systems. Highlight your hands-on experience with offensive security techniques and how you’ve navigated ambiguity in previous roles. This will demonstrate your capability to take ownership of the security lifecycle.
✨Collaborate and Communicate
Since this role involves working closely with engineers across various teams, practice articulating your thoughts clearly. Be ready to discuss how you would collaborate with platform and infrastructure engineers to embed security into their processes. Good communication can set you apart!
✨Think Like an Attacker
During the interview, adopt an attacker’s mindset. Discuss potential vulnerabilities and how you would approach testing them. This not only shows your technical skills but also your strategic thinking in building secure systems. Remember, they want someone who can think critically about risk and exposure.