At a Glance
- Tasks: Build cutting-edge systems to monitor and govern autonomous AI agents in production.
- Company: governr, a pioneering AI risk platform for regulated enterprises.
- Benefits: Competitive salary, equity options, and a clear path to growth.
- Why this job: Make a real impact on AI safety and governance while working with top-tier financial institutions.
- Qualifications: Expertise in Python, Rust, or Go, with experience in production systems and AI safety.
- Other info: Collaborate with industry leaders and gain expertise in AI architectures and regulatory frameworks.
The predicted salary is between £36,000 and £60,000 per year.
We are building the infrastructure that makes autonomous AI safe for enterprise deployment. Not governance theatre. Not compliance checkboxes. Actual technical systems that can monitor, quantify, and govern AI agents operating with autonomy in production environments. If you have been following the trajectory from static models to agentic systems—and the corresponding explosion in risk surface area—you know why this matters now.
governr is the AI risk platform for regulated enterprises. We provide complete AI visibility, real-time risk evaluation and quantification, and audit-ready compliance documentation for enterprises deploying agentic AI. We have built the industry’s most comprehensive AI risk assessment framework. We are currently in active discussions with tier-1 financial institutions and have secured design partnerships with leading firms navigating the shift from analytical AI to agentic systems.
The market timing is critical: enterprises are deploying agents at scale, regulators are demanding governance frameworks, and existing Third-Party Risk Management (TPRM) platforms have near-zero AI-risk depth. As an Agentic Developer at governr, you will build the core systems that monitor, analyse, and govern autonomous AI agents in production. This isn’t traditional software engineering with well-worn patterns. You will:
- Agent Monitoring Infrastructure: Build real-time systems that track agent behaviour across risk factors including agent-to-agent interactions, privilege escalation attempts, emergent capability detection, and behavioural drift from baseline parameters.
- Risk Assessment Engine: Design algorithms that quantify AI risk in financial terms (£X exposure), map behaviours to regulatory requirements, and detect cascade failures before they propagate through multi-agent systems (a toy exposure calculation appears after this list).
- Protocol Monitoring: Implement monitoring for agent communication protocols (Model Context Protocol, Agent2Agent, Agent Connect Protocol) as they mature, ensuring inter-agent communications are authenticated and logged.
- Behavioural Anomaly Detection: Develop ML systems that identify when agents exhibit unexpected strategies, attempt unauthorised actions, or drift from intended behaviour, catching issues before regulators do (see the drift-detection sketch below).
- Compliance Architecture: Build architectures that automatically capture audit trails, decision provenance, and compliance evidence without impacting agent performance (see the non-blocking audit-trail sketch below).
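To make the "financial terms" bullet concrete, here is a minimal sketch of expected-exposure arithmetic: each monitored risk factor is expressed as probability times impact and summed into a single £ figure. The factor names, probabilities, and impact values are invented for illustration; they are not governr's actual methodology.

```python
# Minimal risk-quantification sketch: express each monitored risk factor
# as probability x financial impact and sum to a single £ exposure figure.
# Factor names, probabilities, and impacts are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RiskFactor:
    name: str
    probability: float   # estimated likelihood over the reporting period
    impact_gbp: float    # estimated loss in £ if the risk materialises

    @property
    def exposure(self) -> float:
        return self.probability * self.impact_gbp

def total_exposure(factors):
    """Aggregate expected £ exposure across independent risk factors."""
    return sum(f.exposure for f in factors)

factors = [
    RiskFactor("privilege_escalation", probability=0.02, impact_gbp=500_000),
    RiskFactor("behavioural_drift",    probability=0.10, impact_gbp=50_000),
    RiskFactor("cascade_failure",      probability=0.01, impact_gbp=2_000_000),
]
print(f"expected exposure: £{total_exposure(factors):,.0f}")  # £35,000
```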
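For the behavioural-drift bullet, one plausible approach is to compare an agent's recent action distribution against a recorded baseline using KL divergence and alert past a threshold. This is a minimal sketch, not a description of governr's systems; the class name, window size, and threshold are illustrative assumptions.

```python
# Minimal behavioural-drift sketch: compare an agent's recent action
# distribution against a recorded baseline using KL divergence.
# All names and thresholds here are illustrative assumptions.
from collections import Counter, deque
import math

class DriftMonitor:
    def __init__(self, baseline_actions, window=500, threshold=0.25):
        self.baseline = self._normalise(Counter(baseline_actions))
        self.recent = deque(maxlen=window)   # sliding window of actions
        self.threshold = threshold           # KL-divergence alert level

    @staticmethod
    def _normalise(counts, smoothing=1e-6):
        total = sum(counts.values())
        return {a: (c + smoothing) / (total + smoothing * len(counts))
                for a, c in counts.items()}

    def observe(self, action):
        """Record one agent action; return True if drift exceeds threshold."""
        self.recent.append(action)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data for a stable estimate
        current = self._normalise(Counter(self.recent))
        # KL(current || baseline); actions unseen in baseline get a tiny floor
        kl = sum(p * math.log(p / self.baseline.get(a, 1e-6))
                 for a, p in current.items())
        return kl > self.threshold

monitor = DriftMonitor(baseline_actions=["read_db", "read_db", "call_api"])
for event in ["read_db", "call_api", "escalate_privilege"] * 200:
    if monitor.observe(event):
        print("drift alert: agent behaviour diverges from baseline")
        break
```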
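And for the audit-trail bullet, one common pattern for capturing evidence "without impacting agent performance" is to keep disk I/O off the agent's hot path: the agent only enqueues events, and a background thread persists them. The file path, schema, and field names here are illustrative assumptions.

```python
# Minimal non-blocking audit-trail sketch: the agent's hot path only
# enqueues events; a background thread persists them as JSON lines.
# File path, schema, and field names are illustrative assumptions.
import json, queue, threading, time

class AuditTrail:
    def __init__(self, path="audit.jsonl"):
        self.events = queue.Queue()
        self.path = path
        self.worker = threading.Thread(target=self._drain, daemon=True)
        self.worker.start()

    def record(self, agent_id, action, inputs, decision):
        # Called on the agent's hot path: O(1), no disk I/O here.
        self.events.put({
            "ts": time.time(),
            "agent_id": agent_id,
            "action": action,
            "inputs": inputs,        # decision provenance
            "decision": decision,
        })

    def _drain(self):
        # Background writer: moves events to disk off the hot path.
        with open(self.path, "a", encoding="utf-8") as f:
            while True:
                event = self.events.get()
                f.write(json.dumps(event) + "\n")
                f.flush()  # favour audit durability over throughput

trail = AuditTrail()
trail.record("agent-7", "approve_payment", {"amount": 120.0}, "approved")
time.sleep(0.1)  # give the writer a moment in this toy example
```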
Technical Depth:
- 3-12+ years building production systems at scale.
- Expert-level proficiency in Python, Rust, or Go (you write systems that can’t fail).
- Deep understanding of distributed systems, real-time data processing, and observability architectures.
- Production ML/AI experience: You have deployed models, debugged their failures, and built monitoring around them.
- System design mastery.
Domain Knowledge:
- Understanding of agent architectures: autonomous decision-making, goal-directed behaviour, tool use, memory systems.
- Familiarity with AI safety concepts: alignment, interpretability, robustness, adversarial examples.
- Experience with monitoring/observability: instrumentation, logging, tracing, alerting in complex systems.
Working Style:
- You ship to production regularly and own what you deploy.
- You write documentation that others can actually use.
- You thrive in ambiguity and define requirements through first principles.
- You communicate technical concepts clearly to non-technical stakeholders.
Differentiators:
- Publications or research in multi-agent systems, reinforcement learning, AI safety, or agent architectures.
- Experience at AI labs (Anthropic, OpenAI, DeepMind) or other leading AI research groups.
- Background in financial services, healthcare, or other domains with strict compliance requirements.
- Understanding of adversarial AI, prompt injection, or system security.
- Real-time systems experience: trading systems, fraud detection, or other low-latency critical infrastructure.
- Open-source contributions in relevant domains (AI frameworks, monitoring tools, security infrastructure).
Bonus Points (Nice to Have):
- Understanding of regulatory frameworks (EU AI Act, California AI regulations, GDPR, DORA, and FCA, OCC, or FINRA guidance).
- You read AI safety papers for fun. You are intellectually curious about the intersection of AI capabilities and regulatory constraints.
- You find it genuinely interesting that the EU AI Act requires "human oversight" but doesn’t define what that means for autonomous agents.
- You believe AI agents will transform how organizations operate, but only if we solve the governance problem.
Autonomous AI is moving from research to production, with projections of 4T in annual value from agentic AI. We are building the governance infrastructure for it: the financial quantification of non-deterministic systems. You will help define what "AI governance" means.
Team Quality: Co-founders with deep financial services and AI expertise, including Ayman Hindy, Marcel Cassard, and leading figures in AI, high-frequency risk management, and financial regulation.
Learning Curve: You will gain expertise in cutting-edge AI architectures, enterprise software, regulatory frameworks, and category creation simultaneously. Every system you build enables safe AI adoption for enterprises managing billions in assets.
Compensation: Very competitive salary plus equity in a fast-growing company with a clear path to Series A.
Build the infrastructure that makes autonomous AI safe for society. Not many teams can say their technical work has direct regulatory impact. If you are excited about building unprecedented monitoring systems for autonomous agents, working at the intersection of AI safety and enterprise software, and defining an emerging category—let’s talk.
With your application, include:
- Links to relevant work (code, papers, projects, or systems you have built).
- What you are currently reading/learning in the AI agent space.
- One technical challenge you would be excited to solve at governr.
Ready to build the governance layer for autonomous AI?
Performance Developer in England | Employer: governr
Contact: governr Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Performance Developer role in England
✨Tip Number 1
Network like a pro! Get out there and connect with folks in the AI and tech space. Attend meetups, webinars, or conferences where you can chat with industry leaders and potential colleagues. You never know who might have the inside scoop on job openings!
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those related to AI risk management or monitoring systems. Share it on platforms like GitHub or your personal website. This gives employers a taste of what you can do before they even meet you.
✨Tip Number 3
Prepare for interviews by diving deep into the company’s mission and values. Understand their approach to AI safety and governance. When you can speak their language and show genuine interest, you’ll stand out as a candidate who truly gets what they’re about.
✨Tip Number 4
Don’t just apply through job boards—hit up our website directly! Tailor your application to highlight how your experience aligns with the role of Performance Developer at governr. A direct application shows initiative and enthusiasm, which we love to see!
Some tips for your application 🫡
Show Your Passion for AI Safety: When you're writing your application, let your enthusiasm for AI safety shine through! Talk about why you care about the intersection of AI and governance, and how you see it shaping the future. We want to know what drives you!
Highlight Relevant Experience: Make sure to showcase your experience with production systems, especially in AI or financial services. Mention specific projects or systems you've built that relate to monitoring or risk assessment. This helps us see how you fit into our mission.
Be Clear and Concise: We appreciate clarity! When describing your skills and experiences, keep it straightforward and to the point. Avoid jargon unless it's necessary, and remember, we want to understand your technical expertise without getting lost in the details.
Include Links to Your Work: Don't forget to share links to your relevant work, whether it's code, papers, or projects. This gives us a chance to see your skills in action. Plus, it shows us you're proud of what you've accomplished—so go ahead and brag a little!
How to prepare for a job interview at governr
✨Know Your Stuff
Make sure you brush up on your knowledge of AI safety concepts and agent architectures. Be ready to discuss how your experience with Python, Rust, or Go can contribute to building robust systems that monitor and govern autonomous AI agents.
✨Showcase Your Projects
Bring along links to relevant work you've done, whether it's code, papers, or systems you've built. This is your chance to demonstrate your hands-on experience in production systems and real-time data processing, so make it count!
✨Understand the Market
Familiarise yourself with the current landscape of AI governance and regulatory frameworks like the EU AI Act. Being able to discuss how these regulations impact the development of AI systems will show that you're not just technically savvy but also aware of the bigger picture.
✨Ask Insightful Questions
Prepare some thoughtful questions about the company's approach to AI risk management and the challenges they face. This shows your genuine interest in the role and helps you gauge if the company aligns with your values and career goals.