At a Glance
- Tasks: Lead security for AI-driven autonomous agents in industrial automation.
- Company: Join Phaidra, a remote-first company revolutionising industrial automation with AI.
- Benefits: Competitive salary, unlimited paid time off, and professional development opportunities.
- Why this job: Make a real impact on critical infrastructure while working with cutting-edge technology.
- Qualifications: 5+ years in product security and experience with AI systems.
- Other info: Collaborative culture with a focus on transparency and operational excellence.
The predicted salary is between £89,600 and £134,400 per year.
Phaidra is building the future of industrial automation. The world today is filled with static, monolithic infrastructure. Factories, power plants, buildings, and the like operate the same way they have for decades, because the controls programming is hard-coded: thousands of lines of rules and heuristics that define how the machines interact with each other. The result of all this hard-coding is that facilities are frozen in time, unable to adapt to their environment while their performance slowly degrades.

Phaidra creates AI-powered control systems for the industrial sector, enabling industrial facilities to automatically learn and improve over time. We use reinforcement learning algorithms to provide this intelligence, converting raw sensor data into high-value actions and decisions. We focus on industrial applications, which tend to be well-sensorized with measurable KPIs, making them a natural fit for reinforcement learning. We enable domain experts (our users) to configure the AI control systems (i.e. agents) without writing code: they define what they want their AI agents to do, and we do it for them.
Our team has a track record of applying AI to some of the toughest problems. From achieving superhuman performance with DeepMind's AlphaGo, to reducing the energy required to cool Google’s Data Centers by 40%, we deeply understand AI and how to apply it in production for massive impact. Phaidra's ability to achieve its mission is determined by our ability to work together — as defined by our core values: Transparency, Collaboration, Operational Excellence, Ownership, and Empathy. We seek individuals who embody these values, as they are instrumental in ensuring our team consistently delivers excellence and fosters an engaging and supportive culture.
Phaidra is based in the USA, but we are 100% remote with no physical office. We hire employees internationally with the help of our partner, OysterHR. Our team is currently located throughout the USA, Canada, UK, Italy, Sweden, Spain, Portugal, the Netherlands, Singapore, Australia, and India.
The Opportunity:
At Phaidra, security is the bedrock of trust for our customers operating the world's most critical infrastructure. We are looking for a Senior Product Security Engineer to partner directly with our Agentic AI department. This team is at the forefront of our mission, building autonomous agents responsible for optimizing the operational fabric of AI factories. These agents don't just chat; they act. They make real-time decisions to optimize power usage, cooling efficiency, and hardware health, creating a more stable and efficient environment for massive-scale compute.

This is a high-stakes environment where the integrity and security of AI-driven decisions are paramount. You will tackle the unique security challenges of deploying autonomous agents that interact directly with physical control systems. Security failures here don't just mean data leaks; they can mean operational downtime or physical degradation of critical hardware. We need a security expert who thrives working hand-in-hand with AI researchers and engineers. Your role is to embed security into the DNA of our Agentic platform, ensuring that as our agents learn and explore, they do so within strictly enforced safety boundaries.
Responsibilities:
- Champion Secure Agentic AI Development: Drive the adoption of Phaidra's Secure AI/ML Development Lifecycle (SAIDL) within the Agentic AI team. Adapt security practices to fit the iterative and experimental nature of Reinforcement Learning and agent development.
- Agentic Threat Modeling: Partner with researchers to model threats specific to autonomous agents. Beyond standard AI risks, you will analyze risks unique to agents, such as goal misalignment, reward hacking, infinite looping, and insecure tool execution (e.g., an agent executing a command that exceeds safety limits).
- Secure Agent Architecture & Safety Boundaries: Design secure-by-default architectures for autonomous agents. Crucially, this involves defining deterministic safety guardrails that sit between the probabilistic AI model and the physical hardware controls (a sketch of this pattern appears after this list). Ensure "Zero Trust" applies to the agent: it should only have the minimum permissions needed to adjust specific parameters.
- Secure Agent Tools & Memory: Architect security controls for the "tools" the agent uses (APIs to read sensors or change settings) and the agent's long‑term memory. Ensure the agent cannot be manipulated into using a tool to perform unauthorized actions or "poisoned" via its memory context.
- MLSecOps for RL Pipelines: Secure the training and simulation pipelines used for Reinforcement Learning. Ensure the integrity of the simulation environments (Digital Twins) used to train agents, preventing attackers from influencing agent behavior during the training phase.
- Adversarial Testing & Red Teaming: Lead AI Red Teaming exercises focused on behavioral manipulation. Can you trick the agent into making a suboptimal decision? Can you manipulate the observations the agent receives?
- Incident Preparedness: Develop incident response playbooks tailored for autonomous systems, focusing on "Kill Switches" and rapid rollback capabilities in the event of rogue agent behavior.
- Cross‑Functional Partnership: Build strong relationships with the Agentic AI researchers, SREs, and Data Scientists. Act as an enabler who helps them deploy powerful agents safely, rather than a blocker.
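To make the guardrail idea in "Secure Agent Architecture & Safety Boundaries" concrete, here is a minimal Python sketch of a deterministic safety layer sitting between an RL policy's proposed action and the hardware interface. The parameter names, limits, and function names are hypothetical, invented purely for illustration; they are not Phaidra's actual API or operational values.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Limit:
    lo: float        # absolute lower bound for the setpoint
    hi: float        # absolute upper bound for the setpoint
    max_step: float  # largest change allowed per control cycle

# Hypothetical operational envelope; real values would come from facility engineers.
LIMITS = {
    "chiller_setpoint_c": Limit(lo=16.0, hi=24.0, max_step=0.5),
    "fan_speed_pct": Limit(lo=20.0, hi=100.0, max_step=10.0),
}

def enforce_guardrails(param: str, current: float, proposed: float) -> float:
    """Clamp an agent-proposed action to deterministic safety bounds.

    Runs between the probabilistic policy and the hardware interface: the
    policy's output is advisory; this layer's output is what actually executes.
    Parameters not in the allow-list are denied outright (least privilege).
    """
    limit = LIMITS.get(param)
    if limit is None:
        raise PermissionError(f"Agent has no write access to {param!r}")
    # Rate-limit the change per cycle, then clamp to the absolute envelope.
    step = max(-limit.max_step, min(limit.max_step, proposed - current))
    return max(limit.lo, min(limit.hi, current + step))

# Example: the agent proposes an aggressive jump; the guardrail bounds it.
safe_value = enforce_guardrails("chiller_setpoint_c", current=20.0, proposed=14.0)
assert safe_value == 19.5  # one max_step down, still inside the envelope
```

The point of the design is that the guardrail is plain, auditable code with no learned components, so its behavior can be reviewed and tested exhaustively regardless of what the model does.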
Key Qualifications:
- Agentic AI & RL Security: Proven understanding of the security risks associated with Reinforcement Learning, Autonomous Agents, or automated decision‑making systems.
- AI Partnership: Demonstrated experience working embedded with AI system developers and researchers. You understand the difference between "probabilistic" (AI) and "deterministic" (Code) and how to secure the bridge between them.
- Core Experience: 5+ years of work experience in product security, application security, or a closely related security engineering role.
- Safety Engineering Mindset: You understand that in physical systems, "Availability" and "Safety" often outrank "Confidentiality." You are familiar with concepts like fail‑safes and human‑in‑the‑loop controls.
- Technical Depth: Strong programming experience, ideally with Python (essential for ML/AI ecosystems) or Go. Familiarity with agent frameworks (e.g., LangChain, AutoGPT) or RL libraries (e.g., Ray RLlib). Proven experience securing cloud infrastructure (GCP) and Kubernetes. Deep understanding of authentication and authorization, specifically non-human identities/workload identity (see the scoped-tool sketch after this list).
- Advanced MLOps: Direct, hands‑on experience securing MLOps tooling (e.g., Kubeflow, MLflow) and deep understanding of securing complex data and model‑training pipelines.
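As an illustration of the non-human identity and least-privilege themes above, here is a small Python sketch of deny-by-default tool authorization for agent identities. Every identifier (agent names, scope strings, tool functions) is hypothetical and invented for this example; it is not Phaidra's platform API.

```python
from typing import Callable

# Each agent identity is granted an explicit allow-list of tool scopes.
# In production this mapping would come from a workload-identity system,
# not a hard-coded dict; this is illustrative only.
AGENT_SCOPES = {
    "cooling-agent@prod": {"sensors:read", "cooling:write"},
    "power-agent@prod": {"sensors:read"},  # read-only: cannot actuate anything
}

TOOL_REGISTRY: dict[str, tuple[str, Callable[..., object]]] = {}

def tool(name: str, scope: str):
    """Register a tool together with the scope required to invoke it."""
    def wrap(fn):
        TOOL_REGISTRY[name] = (scope, fn)
        return fn
    return wrap

@tool("read_temperature", scope="sensors:read")
def read_temperature(sensor_id: str) -> float:
    return 21.3  # stub: would query the telemetry API

@tool("set_fan_speed", scope="cooling:write")
def set_fan_speed(pct: float) -> None:
    print(f"fan -> {pct}%")  # stub: would call the control API

def invoke(agent_id: str, tool_name: str, **kwargs):
    """Deny-by-default dispatch: an agent only runs tools it is scoped for."""
    scope, fn = TOOL_REGISTRY[tool_name]
    if scope not in AGENT_SCOPES.get(agent_id, set()):
        raise PermissionError(f"{agent_id} lacks scope {scope!r} for {tool_name}")
    return fn(**kwargs)

invoke("cooling-agent@prod", "set_fan_speed", pct=60.0)   # allowed
# invoke("power-agent@prod", "set_fan_speed", pct=60.0)   # raises PermissionError
```

The authorization check lives in the dispatcher, not in the model or its prompt, so a manipulated agent cannot talk its way into a tool it was never granted.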
Preferred Skills & Experience:
- Industrial / OT Context: Experience working with systems that interface with the physical world (IoT, robotics, ICS/OT). Understanding of the "IT/OT convergence."
- Formal Verification for AI: Experience using mathematical methods to prove that an AI model or agent will not violate specific safety constraints.
- Sim‑to‑Real Security: Experience securing simulation environments (Digital Twins) and managing the security risks of transferring policies from simulation to the real world.
- Protocol Fuzzing: Ability to test industrial protocols (e.g., Modbus, BACnet) for robustness against automated or adversarial inputs (see the sketch after this list).
- AI Governance: Familiarity with emerging standards like the NIST AI RMF or ISO 42001.
- Critical Systems: Experience securing "closed loops" or control systems where latency and reliability are critical.
- Certifications: Relevant advanced certifications, such as GICSP (Global Industrial Cyber Security Professional), ISA/IEC 62443 Cybersecurity Expert, NVIDIA Agentic AI, OSEP (Offensive Security Experienced Penetration Tester), CISSP, or OSCP.
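For a flavor of the protocol-fuzzing skill mentioned above, here is a deliberately simple Python sketch that throws randomized Modbus/TCP frames at a device and flags hangs. The host address is a placeholder (TEST-NET), and this must only ever run against a lab bench or simulated device, never live plant equipment. A real engagement would use a proper fuzzing harness (e.g., boofuzz) with crash monitoring; this only shows the frame-level idea.

```python
import random
import socket
import struct

HOST, PORT = "192.0.2.10", 502  # placeholder TEST-NET address: use your test bench

def mbap_header(txn_id: int, pdu_len: int) -> bytes:
    # Modbus/TCP MBAP header: transaction id, protocol id (0), length, unit id.
    # The length field counts the unit id plus the PDU, hence pdu_len + 1.
    return struct.pack(">HHHB", txn_id, 0, pdu_len + 1, 1)

def fuzz_once(txn_id: int):
    """Send one randomized PDU; return the response bytes, or None on timeout."""
    func = random.randint(0, 255)  # deliberately includes invalid function codes
    data = bytes(random.randint(0, 255) for _ in range(random.randint(0, 8)))
    pdu = bytes([func]) + data
    with socket.create_connection((HOST, PORT), timeout=2) as s:
        s.sendall(mbap_header(txn_id, len(pdu)) + pdu)
        try:
            return s.recv(260)  # 260 bytes is the maximum Modbus/TCP frame
        except socket.timeout:
            return None         # a hang here is itself a robustness finding

for i in range(100):
    if fuzz_once(i) is None:
        print(f"case {i}: no response (possible hang or crash)")
```

A well-behaved device should answer every malformed frame with an exception response rather than stalling or dropping the connection, which is exactly what this loop checks for.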
Our Stack:
- AI/ML: PyTorch, TensorFlow, Ray (RL), LangChain, Gemini/OpenAI/Anthropic models.
- Languages: Python, Go.
- Infrastructure: Docker, Kubernetes, Terraform.
- Cloud: GCP (GKE, Pub/Sub, Bigtable).
Onboarding:
- In your first 30 days (Foundation and AI Landscape Familiarization):
  - Understand the Mission: Deep dive into "Building AI for AI Factories." Understand the specific physical parameters (power, cooling, airflow) our agents are optimizing.
  - Build Trust: Sit with the Agentic AI researchers to understand their workflow. How do they train agents? How do they simulate environments?
  - Initial Review: Conduct a high-level review of the current "Safety Layer" that sits between the agents and the control systems.
- In your first 60 days (Threat Modeling & Guardrails):
  - Agent Threat Model: Lead a detailed threat modeling session for a specific Agentic workflow, focusing on the interface between the agent and the physical hardware.
  - Guardrail Implementation: Propose and begin implementing technical controls (guardrails) that enforce deterministic safety rules, ensuring the AI cannot exceed operational limits regardless of its intent.
  - Secure the Tools: Review the security of the internal APIs (tools) that the agents use to sense and act on the environment.
- In your first 90 days (Strategy & Automation):
  - Reference Architecture: Publish a "Secure Agent Reference Architecture" for future agent development.
  - Driving Initiatives: Drive the implementation of the secure reference architecture and remediation of key findings from the threat modeling exercises.
  - Demonstrable Impact: Showcase measurable improvements in the security of the AI/ML pipeline (e.g., implementation of runtime monitoring for anomalous model behavior, reduction of AI-specific vulnerabilities).
  - Strategic Contributions: Establish yourself as the key security partner and expert for the Agentic AI Department.
General Interview Process:
- Meeting with People Operations team member (30 minutes)
- Meeting with Hiring Manager (30 minutes)
- Technical Interview with our Senior Product Security Engineer (60 minutes)
- Meeting with Agentic AI team member (30 minutes)
- Culture fit interview with Phaidra's co‑founders (30 minutes)
Base Salary:
- Tier 1 (London): 95,200 GBP - 142,000 GBP
- Tier 2 (Manchester, Birmingham, Edinburgh, Bristol): 89,600 GBP - 134,400 GBP
- Tier 3 (Other areas): 84,000 GBP - 126,000 GBP
Benefits & Perks:
- Fast‑paced, team‑oriented environment where your work directly shapes the company’s direction.
- We are a 100% remote company.
- Competitive compensation & meaningful equity.
- Outsized responsibilities & professional development.
- Training is foundational: functional, customer immersion, and professional development training.
- Medical, dental, and vision insurance (exact benefits vary by region).
- Unlimited paid time off, with a required minimum of 20 days per year.
- Paid parental leave (exact benefits vary by region).
- Flexible stipends to support your workspace, well‑being, and continued professional development.
- Company MacBook.
Please note: Not all of Phaidra's benefits and perks listed above apply to temporary employees such as interns.
On being Remote: We take a thoughtful and intentional approach to remote collaboration. Inspired by pioneers like GitLab, we embrace proven best practices to foster an exceptional remote work environment. Our culture is documentation‑first, and we prioritize asynchronous communication to support focus and flexibility across time zones. While we value independence, we stay closely connected through tools like Slack and video conferencing. Weekly all‑hands meetings help us align and build strong relationships, and we regularly host virtual team‑building activities and social events to maintain a sense of camaraderie.
Equal Opportunity Employment: Phaidra is an Equal Opportunity Employer; employment with Phaidra is governed on the basis of merit, competence, and qualifications and will not be influenced in any manner by race, color, religion, gender, national origin/ethnicity, veteran status, age, sexual orientation, gender identity, marital status, mental or physical disability, or any other legally protected status. We welcome diversity and strive to maintain an inclusive environment for all employees. If you need assistance with completing the application process, please contact us at hiring@phaidra.ai.
E-Verify Notice: Phaidra participates in E-Verify, an employment authorization database provided through the U.S. Department of Homeland Security (DHS) and Social Security Administration (SSA). As required by law, we will provide the SSA and, if necessary, the DHS, with information from each new employee's Form I-9 to confirm work authorization for those residing in the United States. Additional information about E-Verify is available on the E-Verify website.
To be considered for any position at Phaidra, you must submit an online application. This role will remain open until it is filled. Phaidra only hires individuals who are legally authorized to work in the specified location(s) above. We do not provide employment sponsorship. Candidates requiring visa sponsorship, either now or in the future, are not eligible for hire. WE DO NOT ACCEPT APPLICATIONS FROM RECRUITERS.
Contact Detail:
Phaidra Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Product Security Engineer role
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, especially those at Phaidra. Use LinkedIn or even Twitter to connect with current employees. A friendly message can go a long way in getting your foot in the door.
✨Tip Number 2
Prepare for those interviews! Research Phaidra's mission and values, and think about how your experience aligns with their focus on security in AI. Practice common interview questions and be ready to discuss your past projects in detail.
✨Tip Number 3
Show off your skills! If you have any relevant projects or contributions to open-source, make sure to highlight them. Consider creating a portfolio that showcases your work in product security and AI, as this can really impress the hiring team.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Senior Product Security Engineer role. Highlight your experience with AI security, reinforcement learning, and any relevant projects that showcase your skills in securing autonomous systems.
Craft a Compelling Cover Letter: Your cover letter should tell us why you're passionate about security in AI and how your background aligns with our mission at Phaidra. Be genuine and let your personality shine through while keeping it professional.
Showcase Relevant Experience: In your application, emphasise your hands-on experience with securing MLOps tooling and cloud infrastructure. We want to see how you've tackled security challenges in past roles, especially in relation to AI and machine learning.
How to prepare for a job interview at Phaidra
✨Know Your Stuff
Before the interview, dive deep into Phaidra's mission and the specifics of the Senior Product Security Engineer role. Familiarise yourself with reinforcement learning, autonomous agents, and the unique security challenges they present. This will show your genuine interest and understanding of the position.
✨Showcase Your Experience
Prepare to discuss your past experiences in product security, especially those related to AI and ML. Be ready to share specific examples of how you've tackled security risks in similar environments. Highlight your technical skills, particularly in Python or Go, and any relevant certifications you hold.
✨Emphasise Collaboration
Phaidra values teamwork and collaboration. Be prepared to discuss how you've worked with cross-functional teams in the past, especially with AI researchers and engineers. Share examples of how you've successfully partnered with others to enhance security without being a blocker.
✨Ask Insightful Questions
At the end of the interview, have a few thoughtful questions ready. Inquire about the current security challenges the Agentic AI team is facing or how they envision the future of AI-driven security. This not only shows your interest but also helps you gauge if the company culture aligns with your values.