At a Glance
- Tasks: Dive into AI risk management and help shape the future of AI accountability.
- Company: Join governr, a pioneering firm at the forefront of AI risk and control.
- Benefits: Enjoy meaningful equity, competitive salary, and direct exposure to founders.
- Why this job: Be part of a groundbreaking team tackling one of today's biggest enterprise challenges.
- Qualifications: Experience in AI/ML systems, risk assessment, and technical auditing is essential.
- Other info: Gain varied experience across customer success, operations, and market activities.
The predicted salary is between £60,000 and £80,000 per year.
About governr: governr is the control and accountability infrastructure for AI in regulated enterprises - the system of record for a firm’s entire AI estate, with 24/7 risk-exposure management and regulatory adherence. We were founded by operators who have built risk and control stacks for high-frequency trading, applying volatility and tail-risk management to non-deterministic AI at scale. Our advisory panel includes former global heads of capital markets, risk management, and AI deployment, as well as country-level leaders of critical security infrastructure.
The moment: every regulated firm on earth is now required - by its board, regulator, and insurer - to prove it has control over its AI. Most cannot. The EU AI Act, the FCA’s PS26/2, FINRA, MAS, SEC, Freddie Mac, HIPAA, MHRA, MOD, DORA, GDPR Article 22, Consumer Duty, FRC, SMCR - every regulatory jurisdiction across industry segments demands this, and regimes like SMCR make AI accountability personal. Insurers are pulling cover. This is the Wiz moment for AI: the control and context gap of AI turns into a category-defining platform.
The product: governr combines 60 AI-specific risk factors, 80 agent-level controls, and 850 regulatory mappings into a single control panel. Every AI system, model, agent, and supplier in a firm’s estate - risk managed, monitored, and auditable with AI-native automation. It is the future language of how AI deployment, trust, and risk are made answerable in minutes, not months. It is the first and only platform purpose-built for the AI control problem regulated enterprises now face.
Practical experience: You have worked in or around AI/ML teams in a delivery or assurance capacity. You need not have been a builder yourself, but you have been close enough to data pipelines, model deployment, and API layers to understand what 'done' looks like. A background in technology risk, ML engineering, or technical consulting with AI exposure would all work. Three to five years of experience. You will have seen enough real AI estates to recognise what 'Gap' and 'Weak' actually look like in practice, because much of the value in a guidance engagement is pattern recognition: spotting that a firm's 'data quality process' is actually just a Notion doc someone wrote once, or that its 'rate limiting' is a single nginx config that nobody has reviewed.
Specific things you would be able to do:
- Read and interpret a data pipeline well enough to identify where lineage tracking, PII scanning, or quality checks are missing.
- Review model evaluation results and know whether HELM benchmarks have been run properly or just ticked off.
- Assess whether an agent's system prompt actually enforces the iteration limits and tool restrictions the Baseline requires.
- Look at an API auth setup and know whether OAuth2 has been implemented correctly or just partially.
- Identify whether logging is genuinely structured and centralised or just console output someone has called 'logs.'
You don't need to be able to build any of these things - if you can, that's a bonus - but you do need to know what a properly implemented version looks like versus a superficial one.
Useful background:
- You've held a role that required reviewing or auditing technical AI or ML systems, not just advising on strategy.
- You've written technical risk assessments or assurance reports that engineering teams - not just governance teams - found credible and useful.
- You are comfortable facilitating workshops with mixed technical and non-technical audiences.
- You know at least one cloud provider's data and ML tooling stack well enough to recognise where common gaps appear.
- Some exposure to the EU AI Act, GDPR, or the NIST AI RMF is useful but not essential at Baseline.
- Governance and policy: you may have written operational policies and procedures that actually got used, not just filed - be it a retention policy, a data protection impact assessment, or an AI governance framework that went through legal or regulatory scrutiny and held up.
- A background in technology governance, data protection, compliance, or risk management.
What you get:
- Meaningful equity.
- A salary competitive for the stage.
- Direct founder exposure: You will work closely with the people building the company and have a real seat at the table with senior, industry-leading AI, risk, data, and technology practitioners and researchers working on agentic systems, security management, and context graphs.
- Real responsibility early: This is a chance to take ownership, not just observe.
- High-value customer exposure: You will interact with serious, senior stakeholders at important firms.
- Career-building topic area: AI risk and governance is one of the defining enterprise issues of this era.
- Front-row seat to building a category: You will see first-hand how a category - and a business - gets built.
- Varied work: The role cuts across customer success, GTM, operations, and market-facing activity, so there is room to learn fast.
How to apply: Send a short note explaining, in your own words, why this role and why now. No cover letter template. No CV tricks. Tell us about your experience. Be ready to talk over a 20-minute coffee, or a call in the evening or over the weekend.
AI Risk Practitioner, City of London - Employer: governr
Contact Detail:
governr Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the AI Risk Practitioner role in the City of London
✨Tip Number 1
Get to know the company inside out! Research governr's mission, values, and recent projects. This will help you tailor your conversations and show that you're genuinely interested in what they do.
✨Tip Number 2
Network like a pro! Reach out to current or former employees on LinkedIn. Ask them about their experiences and any tips they might have for landing a role at governr. Personal connections can make a huge difference!
✨Tip Number 3
Prepare for that coffee chat! Think about how your experience aligns with the role of AI Risk Practitioner. Be ready to discuss specific examples of your work with AI/ML systems and how you've tackled risk management challenges.
✨Tip Number 4
Apply through our website! It’s the best way to ensure your application gets seen. Plus, it shows you're serious about joining the team. Don’t forget to follow up after applying to keep your name fresh in their minds!
Some tips for your application 🫡
Be Yourself: When you write your application, let your personality shine through! We want to know who you are and why you're excited about this role. Don't be afraid to share your passion for AI risk and governance.
Keep It Relevant: Focus on your experience that directly relates to the job. Highlight any work you've done with AI/ML teams or in technology risk. We’re looking for specific examples that show you understand the nuances of AI estates.
Skip the Formalities: Forget about stiff cover letters and CV tricks. Just send us a short note explaining why this role is right for you and why now is the time to jump in. We appreciate authenticity over formality!
Get Ready for a Chat: Be prepared to discuss your experiences in a casual chat. We love getting to know candidates over a coffee or a call, so think about what you want to share and how it connects to the role.
How to prepare for a job interview at governr
✨Know Your AI Risk Fundamentals
Before the interview, brush up on the key concepts of AI risk management and regulatory frameworks like the EU AI Act and GDPR. Being able to discuss these topics confidently will show that you understand the landscape and can contribute meaningfully.
✨Demonstrate Practical Experience
Prepare specific examples from your past roles where you've identified gaps in AI systems or contributed to risk assessments. This will help you illustrate your hands-on experience and pattern recognition skills, which are crucial for this role.
✨Familiarise Yourself with Their Product
Take some time to understand governr's platform and how it integrates various AI risk factors and controls. Being able to speak about their product and how it addresses industry challenges will set you apart as a candidate who is genuinely interested.
✨Engage in Meaningful Dialogue
During the interview, ask insightful questions about their approach to AI governance and risk management. This not only shows your interest but also demonstrates your ability to engage with both technical and non-technical stakeholders effectively.