At a Glance
- Tasks: Lead offensive security operations against AI systems and conduct advanced penetration testing.
- Company: Join Open Code Mission, a leader in AI red teaming and SOC operations.
- Benefits: Competitive salary, flexible work options, and opportunities for professional growth.
- Why this job: Make a real impact in securing cutting-edge AI technologies and advance your career.
- Qualifications: 5+ years in penetration testing, expertise in LLMs, and strong understanding of AI security frameworks.
- Other info: Collaborative environment with opportunities to shape the future of AI security.
The predicted salary is between £36,000 and £60,000 per year.
Lead offensive security operations against AI/ML systems, conduct adversarial red teaming of LLMs, and strengthen enterprise defences through continuous penetration testing and advanced SOC operations.
What We're Looking For
- At least 5 years of professional experience in penetration testing, red teaming, or SOC operations
- Demonstrable expertise in LLMs, generative AI systems, or ML security research
- Hands-on familiarity with adversarial ML frameworks (e.g., CleverHans, ART, custom exploits)
- Strong understanding of MITRE ATT&CK, NIST AI Risk Management Framework, and ISO/IEC 23894 (AI risk standards)
- Experience running end-to-end red team operations in enterprise environments
- Ability to produce investor-grade reporting, executive briefings, and CISO-aligned dashboards
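The adversarial-ML experience listed above centres on evasion attacks of the kind implemented in frameworks such as CleverHans and ART. As a rough illustration only, here is a minimal Fast Gradient Sign Method (FGSM) sketch against a hypothetical toy logistic-regression model (the weights and inputs are invented for this example and are not from the posting):

```python
import math

# Hypothetical toy logistic-regression model (illustrative weights only).
w = [1.5, -2.0, 0.5]
b = 0.1

def dot(a, c):
    """Plain dot product of two equal-length vectors."""
    return sum(ai * ci for ai, ci in zip(a, c))

def predict_proba(x):
    """Probability of the positive class under the toy model."""
    return 1.0 / (1.0 + math.exp(-(dot(x, w) + b)))

def fgsm_perturb(x, y, eps):
    """One FGSM step: move each feature by eps in the sign of the input
    gradient of the cross-entropy loss, which for logistic regression
    is (p - y) * w. This nudges the input toward misclassification."""
    grad = [(predict_proba(x) - y) * wi for wi in w]
    return [xi + eps * math.copysign(1.0, gi) for xi, gi in zip(x, grad)]

x_clean = [0.2, -0.4, 0.1]                      # classified positive (p ≈ 0.78)
x_adv = fgsm_perturb(x_clean, y=1.0, eps=0.5)   # bounded L∞ perturbation

print(predict_proba(x_clean))  # above 0.5: positive class
print(predict_proba(x_adv))    # below 0.5: the prediction has flipped
```

Production frameworks apply the same idea with automatic differentiation against real models; this sketch only shows the core mechanics a red teamer is expected to understand.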
What You'll Do
- Operate within Open Code Mission's advanced SOC framework, focused on AI/ML attack surfaces
- Develop, tune, and operate AI-aware SIEM/SOAR pipelines integrated with LumenACT
- Monitor, detect, and respond to anomalous AI behaviours in real time across client environments
- Conduct full-spectrum penetration tests against enterprise systems with a focus on AI-integrated applications
- Design and execute adversarial campaigns against LLMs, LLM wrappers, and API-based AI deployments
- Exploit and document advanced attack vectors, including:
  - Model memory leaks (episodic & persistent)
  - Shadow-mode activation & phantom-functionality exploits
AI Red Team Specialist employer: Ocmxai
Contact Details:
Ocmxai Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the AI Red Team Specialist role
✨Tip Number 1
Network like a pro! Get out there and connect with folks in the AI and cybersecurity space. Attend meetups, webinars, or conferences where you can chat with industry leaders and potential colleagues. You never know who might have the inside scoop on job openings!
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your red teaming projects, especially those involving LLMs and adversarial ML frameworks. This will give potential employers a taste of what you can do and set you apart from the crowd.
✨Tip Number 3
Don’t just apply blindly! Tailor your approach for each application. Research the company’s current challenges in AI security and mention how your experience aligns with their needs. This shows you’re genuinely interested and not just sending out generic applications.
✨Tip Number 4
Use our website to apply! We’ve got a streamlined process that makes it easy for you to showcase your skills and experience. Plus, applying directly through us means you’ll be on our radar right away. Let’s get you on board!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience in penetration testing and red teaming, especially with AI/ML systems. We want to see how your skills align with the job description, so don’t be shy about showcasing relevant projects!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re passionate about AI security and how your background makes you a perfect fit for our team. Let us know what excites you about the role and our mission.
Showcase Your Technical Skills: Don’t forget to mention your hands-on experience with adversarial ML frameworks and any relevant tools you’ve used. We love seeing practical examples of your work, so feel free to include links to projects or publications if you have them!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it shows you’re keen on joining our mission at StudySmarter!
How to prepare for a job interview at Ocmxai
✨Know Your Stuff
Make sure you brush up on your knowledge of LLMs and generative AI systems. Be ready to discuss specific adversarial ML frameworks like CleverHans and ART, as well as your hands-on experience with penetration testing and red teaming. The more you can demonstrate your expertise, the better!
✨Understand the Frameworks
Familiarise yourself with the MITRE ATT&CK framework and the NIST AI Risk Management Framework. Being able to talk about how these standards apply to AI security will show that you’re not just technically skilled but also understand the broader context of your work.
✨Prepare for Real-World Scenarios
Expect to be asked about your experience running end-to-end red team operations. Prepare examples of past projects where you’ve successfully exploited advanced attack vectors or led purple team exercises. Concrete examples will help you stand out!
✨Communicate Clearly
Since you'll need to produce investor-grade reporting and executive briefings, practice explaining complex technical concepts in simple terms. Being able to communicate effectively with both technical and non-technical stakeholders is key to success in this role.