Production Developer in England

England · Full-Time · £36,000–£60,000 / year (est.) · No home office possible

At a Glance

  • Tasks: Build cutting-edge systems to monitor and govern autonomous AI agents in production.
  • Company: Join governr, a pioneering AI risk platform for regulated enterprises.
  • Benefits: Competitive salary, equity options, and a clear path to growth.
  • Why this job: Make a real impact on AI safety and governance while working with industry leaders.
  • Qualifications: Expertise in Python, Rust, or Go, with experience in production systems and AI safety.
  • Other info: Collaborate with top minds in AI and finance, shaping the future of autonomous technology.

The predicted salary is between £36,000 and £60,000 per year.

We are building the infrastructure that makes autonomous AI safe for enterprise deployment. Not governance theatre. Not compliance checkboxes. Actual technical systems that can monitor, quantify, and govern AI agents operating with autonomy in production environments. If you have been following the trajectory from static models to agentic systems—and the corresponding explosion in risk surface area—you know why this matters now.

governr is the AI risk platform for regulated enterprises. We provide complete AI visibility, real-time risk evaluation and quantification, and audit-ready compliance documentation for enterprises deploying agentic AI. We have built the industry’s most comprehensive AI risk assessment framework, are in active discussions with tier-1 financial institutions, and have secured design partners among leading firms navigating the shift from analytical AI to agentic systems.

The market timing is critical: enterprises are deploying agents at scale, regulators are demanding governance frameworks, and existing Third-Party Risk Management (TPRM) platforms have near-zero AI-risk depth. As an Agentic Developer at governr, you will build the core systems that monitor, analyse, and govern autonomous AI agents in production. This isn’t traditional software engineering with well-worn patterns; your responsibilities will include:

  • Agent Monitoring Infrastructure: Build real-time systems that track agent behaviour across risk factors including agent-to-agent interactions, privilege escalation attempts, emergent capability detection, and behavioural drift from baseline parameters.
  • Risk Assessment Engine: Design algorithms that quantify AI risk in financial terms (£X exposure), map behaviours to regulatory requirements, and detect cascade failures before they propagate through multi-agent systems.
  • Protocol Monitoring: Implement monitoring for agent communication protocols (Model Context Protocol, Agent2Agent, Agent Connect Protocol) as they mature, ensuring authenticated and logged inter-agent communications.
  • Anomaly Detection: Develop ML systems that identify when agents exhibit unexpected strategies, attempt unauthorized actions, or drift from intended behaviour—catching issues before regulators do.
  • Audit Architecture: Build architectures that automatically capture audit trails, decision provenance, and compliance evidence without impacting agent performance.
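To make the behavioural-drift idea above concrete, here is a minimal illustrative sketch, not governr's actual system: it treats an agent's recent behaviour as a distribution over action names and flags drift when the KL divergence from a baseline distribution crosses a threshold. All names here (`drift_alert`, the action labels, the threshold value) are hypothetical.

```python
import math
from collections import Counter

def action_distribution(actions, vocab):
    """Smoothed probability distribution over an agent's observed actions."""
    counts = Counter(actions)
    total = len(actions) + len(vocab)  # add-one (Laplace) smoothing
    return {a: (counts.get(a, 0) + 1) / total for a in vocab}

def kl_divergence(p, q):
    """KL(p || q) between two distributions over the same action vocabulary."""
    return sum(p[a] * math.log(p[a] / q[a]) for a in p)

def drift_alert(baseline_actions, recent_actions, threshold=0.1):
    """Flag drift when recent behaviour diverges from the baseline profile."""
    vocab = set(baseline_actions) | set(recent_actions)
    recent = action_distribution(recent_actions, vocab)
    baseline = action_distribution(baseline_actions, vocab)
    return kl_divergence(recent, baseline) > threshold

# Toy traces: an agent that suddenly starts attempting privilege escalation.
baseline = ["read_file"] * 80 + ["call_api"] * 20
normal   = ["read_file"] * 75 + ["call_api"] * 25
drifted  = ["read_file"] * 30 + ["call_api"] * 20 + ["escalate_privilege"] * 50

print(drift_alert(baseline, normal))   # False: small fluctuation
print(drift_alert(baseline, drifted))  # True: new high-frequency action
```

A production version would of course work over richer features (tool arguments, call graphs, timing) and streaming windows, but the core shape — baseline profile, divergence metric, threshold alert — is the same.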

Technical Depth:

  • 3–12+ years building production systems at scale.
  • Expert-level proficiency in Python, Rust, or Go (you write systems that can’t fail).
  • Deep understanding of distributed systems, real-time data processing, and observability architectures.
  • Production ML/AI experience: You have deployed models, debugged their failures, and built monitoring around them.
  • System design mastery.

Domain Knowledge:

  • Understanding of agent architectures: autonomous decision-making, goal-directed behaviour, tool use, memory systems.
  • Familiarity with AI safety concepts: alignment, interpretability, robustness, adversarial examples.
  • Experience with monitoring/observability: instrumentation, logging, tracing, alerting in complex systems.

Working Style:

  • You ship to production regularly and own what you deploy.
  • You write documentation that others can actually use.
  • You thrive in ambiguity and define requirements through first principles.
  • You communicate technical concepts clearly to non-technical stakeholders.

Standout Background (Any of the Following):

  • In multi-agent systems, reinforcement learning, AI safety, or agent architectures.
  • Experience at AI labs (Anthropic, OpenAI, DeepMind) or leading AI research groups.
  • Financial services, healthcare, or other domains with compliance requirements.
  • Understanding of adversarial AI, prompt injection, or system security.
  • Real-time systems experience: Trading systems, fraud detection, or other low-latency critical infrastructure.
  • Open source contributions in relevant domains (AI frameworks, monitoring tools, security infrastructure).

Bonus Points (Nice to Have):

  • Understanding of regulatory frameworks (EU AI Act, California AI regulations, GDPR, DORA, FCA, OCC, FINRA guidance).
  • You read AI safety papers for fun and are intellectually curious about the intersection of AI capabilities and regulatory constraints.
  • You find it genuinely interesting that the EU AI Act requires "human oversight" but doesn’t define what that means for autonomous agents.
  • You believe AI agents will transform how organizations operate, but only if we solve the governance problem.

Why Now:

  • Autonomous AI is moving from research to production, with a projected 4T in annual value from agentic AI. We are building the infrastructure to govern it.
  • You will work on the financial quantification of non-deterministic systems and help define what "AI governance" means in practice.

Team Quality: Co-founders with deep financial services and AI expertise: Ayman Hindy, Marcel Cassard, and leading figures in AI, high-frequency risk management, and financial regulation.

Learning Curve: You will gain expertise in cutting-edge AI architectures, enterprise software, regulatory frameworks, and category creation simultaneously. Every system you build enables safe AI adoption for enterprises managing billions in assets.

Compensation: Very competitive salary plus equity in a fast-growing company with a clear path to Series A.

Build the infrastructure that makes autonomous AI safe for society. Not many teams can say their technical work has direct regulatory impact. If you are excited about building unprecedented monitoring systems for autonomous agents, working at the intersection of AI safety and enterprise software, and defining an emerging category—let’s talk.

With your application, please include:

  • Links to relevant work (code, papers, projects, or systems you have built).
  • What you are currently reading/learning in the AI agent space.
  • One technical challenge you would be excited to solve at governr.

Ready to build the governance layer for autonomous AI?

Production Developer in England employer: governr

At governr, we are at the forefront of AI risk management, providing a dynamic work environment that fosters innovation and collaboration. Our team is composed of industry experts who are passionate about building cutting-edge systems that ensure the safe deployment of autonomous AI in enterprise settings. With competitive salaries, equity opportunities, and a commitment to employee growth, we empower our developers to thrive while making a meaningful impact on the future of AI governance.

Contact Detail:

governr Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Production Developer role in England

✨Tip Number 1

Network like a pro! Get out there and connect with folks in the AI and tech space. Attend meetups, webinars, or conferences where you can chat with industry leaders and potential colleagues. You never know who might have the inside scoop on job openings!

✨Tip Number 2

Show off your skills! Create a portfolio showcasing your projects, especially those related to AI and risk management. Share your code on GitHub or write about your experiences in blogs. This gives us a chance to see your expertise in action and makes you stand out.

✨Tip Number 3

Prepare for interviews by diving deep into the company’s mission and values. Understand how your skills align with their goals, especially around AI safety and governance. We love candidates who can articulate how they can contribute to our vision!

✨Tip Number 4

Don’t just apply anywhere—apply through our website! Tailor your application to highlight your experience with autonomous systems and risk assessment. We’re looking for passionate individuals who are ready to make an impact in the AI space!

We think you need these skills to ace the Production Developer role in England

Python
Rust
Go
Distributed Systems
Real-Time Data Processing
Observability Architectures
Machine Learning
AI Safety Concepts
Monitoring and Observability
System Design
Technical Documentation
Communication Skills
Regulatory Frameworks Understanding
Agent Architectures Knowledge
Analytical Skills

Some tips for your application 🫡

Show Your Passion for AI Safety: When you're writing your application, let your enthusiasm for AI safety shine through! Talk about why you care about the intersection of AI and governance, and how you see it shaping the future. We want to know what drives you!

Highlight Relevant Experience: Make sure to showcase your experience with production systems, especially in AI or financial services. Mention specific projects or systems you've built that relate to monitoring or risk assessment. This helps us see how you fit into our mission.

Be Clear and Concise: We appreciate clarity! When describing your skills and experiences, keep it straightforward and to the point. Avoid jargon unless it's necessary, and remember, we want to understand your journey without getting lost in technical details.

Include Links to Your Work: Don’t forget to share links to your relevant work, whether it's code, papers, or projects. This gives us a chance to see your skills in action. Plus, it shows us you're proud of what you've accomplished—so go ahead and brag a little!

How to prepare for a job interview at governr

✨Know Your Stuff

Make sure you have a solid grasp of the technical skills required for the role, especially in Python, Rust, or Go. Brush up on your knowledge of distributed systems and real-time data processing, as these will likely come up during the interview.

✨Understand AI Safety Concepts

Familiarise yourself with AI safety concepts like alignment, interpretability, and robustness. Be prepared to discuss how these concepts apply to the role and the importance of governance in autonomous AI systems.

✨Showcase Your Experience

Bring examples of your past work that demonstrate your ability to build production systems at scale. Highlight any experience you have with monitoring and observability in complex systems, as this is crucial for the position.

✨Ask Insightful Questions

Prepare thoughtful questions about the company's approach to AI governance and the challenges they face. This shows your genuine interest in the role and helps you understand if it's the right fit for you.
