Principal Governance, Risk, and Compliance Engineer in London

London · Full-Time · £55,000–£95,000 / year (est.) · Hybrid working

At a Glance

  • Tasks: Lead governance, risk, and compliance engineering for cutting-edge AI systems.
  • Company: Join the world's largest team focused on advanced AI risks and safety.
  • Benefits: Competitive salary, generous leave, hybrid working, and professional development opportunities.
  • Why this job: Make a real impact on AI governance while working with top experts and resources.
  • Qualifications: Experience in compliance, risk management, and familiarity with AI systems.
  • Other info: Dynamic environment with opportunities for growth and collaboration across sectors.

The predicted salary is between £55,000 and £95,000 per year.

The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We're in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally. We're here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.

About the Team: Security Engineering at the AI Security Institute (AISI) exists to help our researchers move fast, safely. We are founding the Security Engineering team in a largely greenfield cloud environment, treating security as a measurable, researcher-centric product. Secure-by-design platforms, automated governance, and intelligence-led detection protect our people, partners, models, and data. We work shoulder to shoulder with research units and core technology teams, optimising for enablement over gatekeeping, with proportionate controls, low ego, and high ownership.

What you might work on:

  • Help design and ship paved roads and secure defaults across our platform so researchers can build quickly and safely.
  • Build provenance and integrity into the software supply chain (signing, attestation, artefact verification, reproducibility); a minimal verification sketch follows this list.
  • Help strengthen identity, segmentation, secrets, and key management to create a defensible foundation for evaluations at scale.
  • Develop automated, evidence-driven assurance mapped to relevant standards, reducing audit toil and improving signal.
  • Create detections and response playbooks tailored to model evaluations and research workflows, and run exercises to validate them.
  • Threat model new evaluation pipelines with research and core technology teams, fixing classes of issues at the platform layer.
  • Assess third-party services and hardware/software supply chains; introduce lightweight controls that raise the bar.
  • Contribute to open standards and open source, and share lessons with the broader community where appropriate.
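
To give a flavour of the signing/attestation bullet above, here is a minimal sketch of artefact integrity verification in Python. Everything in it is illustrative: the file names and the attestation format are hypothetical, and a real pipeline would verify a cryptographic signature over the attestation (e.g. with Sigstore-style tooling) rather than trust a bare JSON file.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming to avoid loading it whole."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(artifact: Path, attestation: Path) -> bool:
    """Check an artefact's digest against the one recorded in a hypothetical
    attestation JSON of the form {"subject": "<name>", "sha256": "<hex>"}."""
    recorded = json.loads(attestation.read_text())["sha256"]
    return sha256_of(artifact) == recorded

if __name__ == "__main__":
    # Both file names are placeholders for whatever the build actually produces.
    ok = verify_artifact(Path("model_eval.whl"), Path("model_eval.attestation.json"))
    print("integrity verified" if ok else "DIGEST MISMATCH: do not deploy")
```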

If you want to build security that accelerates frontier-scale AI safety research and see your work land in production quickly, this is a good place to do it.

Role Summary: Own and operationalise AISI's governance, risk, and compliance (GRC) engineering practice. This role sits at the intersection of security engineering, assurance, and policy, turning paper-based requirements into actionable, testable, and automatable controls. You will lead the technical response to GovAssure and other regulatory requirements, ensuring compliance is continuous and evidence-driven. You will also extend GRC disciplines to frontier AI systems, integrating model lifecycle artefacts, evaluations, and release gates into the control and evidence pipeline.
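
As an illustration of what "actionable, testable, and automatable" can look like, here is a minimal sketch, assuming a hypothetical YAML control schema: the `GOV-003` control, its fields, and the `flag_equals` check type are all invented for illustration, and it uses PyYAML's `yaml.safe_load`. The idea is that the paper requirement lives as a structured artefact in version control, and a small checker turns it into a pass/fail signal a pipeline can gate on.

```python
import yaml  # PyYAML: pip install pyyaml

# A hypothetical control definition, as it might live in a GitOps repo.
CONTROL_YAML = """
id: GOV-003
source: GovAssure            # illustrative framework reference
requirement: "Audit logging must be enabled on all evaluation services"
check:
  type: flag_equals
  key: audit_logging_enabled
  expected: true
"""

def evaluate(control: dict, observed: dict) -> bool:
    """Evaluate a 'flag_equals' check against observed configuration."""
    check = control["check"]
    if check["type"] != "flag_equals":
        raise ValueError(f"unsupported check type: {check['type']}")
    return observed.get(check["key"]) == check["expected"]

if __name__ == "__main__":
    control = yaml.safe_load(CONTROL_YAML)
    observed = {"audit_logging_enabled": True}  # placeholder: fetched from live config
    status = "PASS" if evaluate(control, observed) else "FAIL"
    print(f"{control['id']} ({control['source']}): {status}")
```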

Responsibilities:

  • Translate regulatory frameworks (e.g. GovAssure, CAF) into programmatic controls and technical artefacts.
  • Build and maintain a continuous control validation and evidence pipeline (a minimal sketch follows this list).
  • Develop and own a capability-based risk management approach aligned to AISI's delivery model.
  • Maintain the AISI risk register and risk acceptance/exception handling process.
  • Act as the key interface for DSIT governance, policy, and assurance stakeholders.
  • Work cross-functionally to ensure risk and compliance are embedded into AISI delivery lifecycles.
  • Extend controls and evidence to the frontier AI model lifecycle.
  • Integrate AI safety evidence (e.g., model/dataset documentation, evaluations, red-team results, release gates) into automated compliance workflows.
  • Define and implement controls for model weights handling, compute governance, third-party model/API usage, and model misuse/abuse monitoring.
  • Support readiness for AI governance standards and regulations (e.g., NIST AI RMF, ISO/IEC 42001, ISO/IEC 23894; EU AI Act exposure where relevant).
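
To make the control-validation and AI-evidence bullets above concrete, here is a minimal sketch of an evidence-emitting pipeline. The control IDs, framework references, and the 0.2 evaluation threshold are hypothetical; real checks would query cloud APIs, CI metadata, or evaluation stores rather than return placeholders.

```python
import json
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

@dataclass
class Control:
    control_id: str      # internal ID; framework_ref maps it to an external standard
    framework_ref: str
    description: str
    check: Callable[[], bool]

# Hypothetical checks standing in for real queries against infrastructure.
def storage_encrypted() -> bool:
    return True  # placeholder: e.g. verify bucket encryption via a cloud SDK

def eval_gate_passed() -> bool:
    results = {"dangerous_capability_score": 0.12}  # placeholder eval artefact
    return results["dangerous_capability_score"] < 0.2  # illustrative threshold

CONTROLS = [
    Control("SEC-001", "CAF B3.a", "Data at rest is encrypted", storage_encrypted),
    Control("AI-007", "NIST AI RMF MEASURE", "Release gate: eval under threshold", eval_gate_passed),
]

def run_pipeline() -> list[dict]:
    """Run every control and emit timestamped evidence records for the audit trail."""
    return [{
        "control_id": c.control_id,
        "framework_ref": c.framework_ref,
        "passed": c.check(),
        "checked_at": datetime.now(timezone.utc).isoformat(),
    } for c in CONTROLS]

if __name__ == "__main__":
    print(json.dumps(run_pipeline(), indent=2))
```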

Profile Requirements:

  • Staff- or Principal-level engineer or technical GRC specialist.
  • Experience in compliance-as-code, control validation, or regulated cloud environments.
  • Familiarity with YAML, GitOps, structured artefacts, and automated policy checks.
  • Equally confident in engineering meetings and in policy/government forums.
  • Practical understanding of frontier AI system risks and artefacts (e.g., model evaluations, red-teaming, model/dataset documentation, release gating, weights handling) sufficient to translate AI policy into controls and machine-checkable evidence.
  • Desirable: familiarity with MLOps tooling (e.g., experiment tracking, model registries) and integrating ML artefacts into CI/CD or evidence pipelines.
  • Experience translating policy into technical controls.
  • Experience designing controls as code or machine-checkable evidence.
  • Familiarity with frameworks (GovAssure, CAF, NIST) and AI governance standards (NIST AI RMF, ISO/IEC 42001, ISO/IEC 23894).
  • Experience building risk management workflows, including for AI-specific risks (model misuse, capability escalation, data/weights security); a minimal register sketch follows this list.
  • Stakeholder engagement with governance teams and AI/ML engineering teams.
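
As a flavour of the risk-workflow bullet above, here is a minimal sketch of a register entry with time-boxed risk acceptance. The fields, IDs, and 90-day review period are assumptions for illustration, not AISI's actual process; the point is that acceptances expire and get surfaced for re-review rather than persisting silently.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RiskEntry:
    risk_id: str
    title: str
    severity: str            # e.g. "low" / "medium" / "high"
    accepted_by: str | None  # risk owner who signed the acceptance, if any
    accepted_on: date | None
    review_days: int = 90    # illustrative review period

    def acceptance_expired(self, today: date) -> bool:
        """An acceptance lapses after review_days and must be re-reviewed."""
        if self.accepted_on is None:
            return False
        return today > self.accepted_on + timedelta(days=self.review_days)

# Hypothetical register entries.
REGISTER = [
    RiskEntry("R-014", "Third-party model API lacks audit logging", "medium",
              accepted_by="security-lead", accepted_on=date(2024, 1, 10)),
    RiskEntry("R-021", "Model weights replicated to unmanaged storage", "high",
              accepted_by=None, accepted_on=None),
]

if __name__ == "__main__":
    today = date.today()
    for r in REGISTER:
        if r.accepted_by is None:
            print(f"{r.risk_id}: OPEN ({r.severity}) - {r.title}")
        elif r.acceptance_expired(today):
            print(f"{r.risk_id}: acceptance EXPIRED - re-review required")
        else:
            print(f"{r.risk_id}: accepted by {r.accepted_by} until next review")
```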

What We Offer:

  • Impact you couldn't have anywhere else.
  • Incredibly talented, mission-driven and supportive colleagues.
  • Direct influence on how frontier AI is governed and deployed globally.
  • Work with the Prime Minister's AI Advisor and leading AI companies.
  • Opportunity to shape the first and best-resourced public-interest research team focused on AI security.
  • Resources & access: Pre-release access to multiple frontier models and ample compute.
  • Extensive operational support so you can focus on research and ship quickly.
  • Work with experts across national security, policy, AI research and adjacent sciences.
  • If you're talented and driven, you'll own important problems early.
  • 5 days off for learning and development, an annual learning and development stipend, and funding for conferences and external collaborations.
  • Freedom to pursue research bets without product pressure.
  • Opportunities to publish and collaborate externally.

Life & Family:

  • Modern central London office (cafes, food court, gym) or option to work in similar government offices in Birmingham, Cardiff, Darlington, Edinburgh, Salford or Bristol.
  • Hybrid working, flexibility for occasional remote work abroad and stipends for work-from-home equipment.
  • At least 25 days' annual leave, 8 public holidays, extra team-wide breaks and 3 days off for volunteering.
  • Generous paid parental leave (36 weeks of UK statutory leave shared between parents + 3 extra paid weeks + option for additional unpaid time).
  • On top of your salary, we contribute 28.97% of your base salary to your pension.
  • Discounts and benefits for cycling to work, donations and retail/gyms.
  • Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000 (base plus technical allowance), with 28.97% employer pension and other benefits on top.

This role sits outside of the DDaT pay framework, as its scope requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.

We are committed to providing equal opportunities and promoting diversity and inclusion for all applicants.

Principal Governance, Risk, and Compliance Engineer in London employer: Aisafety

The AI Security Institute is an exceptional employer, offering a unique opportunity to work at the forefront of AI governance and compliance in a dynamic and supportive environment. With direct access to influential stakeholders, including the Prime Minister's office, employees benefit from a culture that prioritises innovation, collaboration, and personal growth, alongside generous leave policies and professional development support. Located in modern offices in central London or various UK cities, the Institute fosters a flexible work-life balance, making it an ideal place for those passionate about shaping the future of AI safety.

Contact Detail:

Aisafety Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Principal Governance, Risk, and Compliance Engineer role in London

✨Tip Number 1

Network like a pro! Reach out to folks in the AI and security space, especially those connected to the AI Security Institute. Attend events, webinars, or even local meetups to get your name out there and make those valuable connections.

✨Tip Number 2

Show off your skills! Create a portfolio that highlights your experience with governance, risk, and compliance in AI. Share projects or contributions to open-source initiatives that demonstrate your expertise and passion for the field.

✨Tip Number 3

Prepare for interviews by diving deep into the latest trends in AI safety and compliance. Familiarise yourself with frameworks like GovAssure and NIST AI RMF, so you can confidently discuss how you’d tackle challenges in the role.

✨Tip Number 4

Don’t forget to apply through our website! It’s the best way to ensure your application gets the attention it deserves. Plus, we love seeing candidates who are proactive about their job search!

We think you need these skills to ace the Principal Governance, Risk, and Compliance Engineer role in London

Governance, Risk, and Compliance (GRC)
Compliance-as-Code
Control Validation
Regulated Cloud Environments
YAML
GitOps
Automated Policy Checks
AI Safety Evidence Integration
Risk Management Workflows
Stakeholder Engagement
Understanding of Regulatory Frameworks (e.g., GovAssure, CAF, NIST)
Familiarity with AI Governance Standards (NIST AI RMF, ISO/IEC 42001, ISO/IEC 23894)
Technical Artefact Development
Model Lifecycle Management

Some tips for your application 🫡

Tailor Your Application: Make sure to customise your CV and cover letter to reflect the specific skills and experiences that align with the Principal Governance, Risk, and Compliance Engineer role. Highlight your familiarity with frameworks like GovAssure and NIST, as well as any experience in compliance-as-code.

Showcase Your Technical Skills: Don’t shy away from showcasing your technical expertise! Mention your experience with YAML, GitOps, and any relevant MLOps tooling. We want to see how you can translate policy into actionable controls, so give us examples of your past work.

Be Clear and Concise: When writing your application, keep it clear and to the point. Use bullet points where possible to make your achievements stand out. We appreciate straightforward communication, especially when it comes to complex topics like AI governance.

Apply Through Our Website: We encourage you to apply directly through our website. This ensures your application gets to the right people quickly. Plus, it’s a great way to show your enthusiasm for joining our team at the AI Security Institute!

How to prepare for a job interview at Aisafety

✨Know Your Regulations

Familiarise yourself with key regulatory frameworks like GovAssure and NIST AI RMF. Be ready to discuss how you would translate these into actionable controls, as this will show your understanding of the compliance landscape.

✨Showcase Your Technical Skills

Be prepared to demonstrate your experience with compliance-as-code and control validation. Bring examples of how you've used tools like YAML and GitOps in previous roles to automate policy checks and ensure compliance.

✨Engage with Stakeholders

Highlight your ability to work cross-functionally. Discuss past experiences where you've collaborated with governance teams and engineering units to embed risk and compliance into delivery lifecycles.

✨Prepare for Scenario Questions

Expect scenario-based questions that assess your problem-solving skills in real-world situations. Think about how you would handle risks related to AI systems, such as model misuse or data security, and be ready to articulate your thought process.
