At a Glance
- Tasks: Ensure secure and compliant AI development while collaborating with various teams.
- Company: Join a leading financial services firm focused on innovative AI solutions.
- Benefits: Enjoy hybrid work, competitive pay, and opportunities for professional growth.
- Why this job: Be at the forefront of ethical AI, making a real impact in the industry.
- Qualifications: Strong background in AI/ML, security, and compliance is essential.
- Other info: Initial 6-month contract with potential for extension.
The predicted salary is between £48,000 and £84,000 per year.
My Financial Services client is seeking to recruit an AI Security & Compliance Engineer / Specialist on an initial 6-month contract based in London. The role is hybrid and requires three days onsite per week. You will ensure the secure, ethical, and compliant development of AI solutions across the organisation. This role is central to embedding security, privacy, and regulatory controls into the design and engineering of AI products, including Microsoft Copilot, custom AI agents, and broader generative AI applications. You will work closely with engineering, architecture, legal, security, and risk teams to define and implement controls across the AI lifecycle, ensuring alignment with internal policies and external regulations such as the EU AI Act, FCA guidance, and GDPR. A key part of this role involves leveraging Microsoft Purview to enforce data governance, classification, and compliance across AI systems. You will also collaborate with the AI Governance Lead to assess and support the onboarding of new AI systems into the bank, ensuring that all solutions meet the required standards for security, transparency, and regulatory compliance.
Accountabilities & Responsibilities
Secure AI Engineering & Design Collaboration
- Partner with engineering teams to embed security-by-design and privacy-by-design principles into AI agents, copilots, and automation workflows.
- Define and implement technical controls for:
  - Data access and protection
  - Model transparency and explainability
  - Human oversight and fallback mechanisms
  - Audit logging and traceability
AI Risk & Compliance Architecture
- Design and enforce compliance frameworks for high-risk AI systems, aligned with the EU AI Act, FCA/PRA AI Principles, and ISO/IEC 42001.
- Conduct technical risk assessments on AI use cases, focusing on model behaviour, data governance, and user interaction.
- Collaborate on the development of model cards, risk registers, and post-market monitoring plans.
Microsoft Purview Integration
- Use Microsoft Purview to implement and manage:
  - Data classification and sensitivity labels
  - Data loss prevention (DLP) policies
  - Information protection and access controls
  - Compliance reporting and audit trails for AI-related data flows
AI System Onboarding & Governance Support
- Work with the AI Governance Lead to assess new AI systems being introduced into the bank.
- Evaluate solutions for compliance with internal policies and external regulations.
- Provide technical input on risk mitigation strategies and onboarding documentation.
Security & DevSecOps Integration
- Integrate AI security controls into CI/CD pipelines and MLOps workflows.
- Use tools such as Azure Key Vault, Microsoft Entra ID, and GitHub Actions for secure deployment and access management.
- Monitor AI systems using Azure Monitor, Log Analytics, and Application Insights.
Policy Implementation & Regulatory Alignment
- Translate regulatory requirements into actionable engineering guidelines and reusable controls.
- Ensure AI systems avoid prohibited practices and meet obligations around:
  - Transparency and user awareness
  - Data minimisation and lawful processing
  - Continuous monitoring and incident response
Cross-Functional Collaboration & Governance
- Partner with legal, compliance, and architecture teams to align AI development with enterprise risk and governance frameworks.
- Contribute to internal working groups on Responsible AI, AI governance, and ethical design.
- Educate stakeholders on emerging AI risks and mitigation strategies.
Qualifications and skills:
- Strong technical background in AI/ML systems, with experience embedding security and compliance into product design.
- Expert-level knowledge of Microsoft Purview for data governance, classification, and compliance.
- Familiarity with AI governance frameworks (e.g., NIST AI RMF, ISO/IEC 42001, Microsoft Responsible AI Standard).
- Hands-on experience with:
  - Azure AI services, Microsoft Copilot Studio, and Power Platform
  - Secure deployment tools (e.g., Azure Key Vault, RBAC, CI/CD pipelines)
  - Data protection and privacy controls (e.g., DLP, masking, classification)
- Knowledge of regulatory frameworks including the EU AI Act, GDPR, and FCA guidance.
- Experience working in cross-functional teams across engineering, legal, and risk domains.
- Excellent communication and documentation skills, with the ability to translate complex requirements into technical solutions.
AI Security & Compliance Engineer employer: Adecco
Contact Detail:
Adecco Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the AI Security & Compliance Engineer role
✨Tip Number 1
Familiarise yourself with the latest regulations and frameworks related to AI, such as the EU AI Act and GDPR. Understanding these will not only help you in interviews but also demonstrate your commitment to compliance and security in AI.
✨Tip Number 2
Network with professionals in the AI security and compliance field. Attend relevant meetups or webinars to connect with others who work in this area, as they may provide insights or even referrals that could help you land the job.
✨Tip Number 3
Showcase your hands-on experience with Microsoft Purview and other relevant tools in your discussions. Being able to talk about specific projects where you've implemented data governance or compliance measures can set you apart from other candidates.
✨Tip Number 4
Prepare to discuss how you've collaborated with cross-functional teams in the past. Highlighting your ability to work with engineering, legal, and risk teams will demonstrate that you can contribute effectively to the collaborative environment this role requires.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights relevant experience in AI security and compliance. Focus on your technical background in AI/ML systems and any hands-on experience with Microsoft Purview, Azure services, and regulatory frameworks like the EU AI Act and GDPR.
Craft a Strong Cover Letter: In your cover letter, express your passion for AI security and compliance. Mention specific projects or experiences where you successfully embedded security and compliance into product design, and how you can contribute to the company's goals.
Highlight Cross-Functional Collaboration: Emphasise your experience working with cross-functional teams, particularly in engineering, legal, and risk domains. Provide examples of how you've collaborated with different stakeholders to align AI development with governance frameworks.
Showcase Communication Skills: Since excellent communication is key for this role, include examples of how you've effectively communicated complex technical requirements to non-technical stakeholders. This will demonstrate your ability to educate and inform others about emerging AI risks and mitigation strategies.
How to prepare for a job interview at Adecco
✨Understand the Regulatory Landscape
Familiarise yourself with key regulations such as the EU AI Act, GDPR, and FCA guidance. Be prepared to discuss how these regulations impact AI development and compliance, and think of examples where you've applied similar principles in your previous roles.
✨Showcase Your Technical Expertise
Highlight your experience with Microsoft Purview and other relevant tools like Azure Key Vault and CI/CD pipelines. Be ready to explain how you've integrated security and compliance into AI systems, using specific projects or challenges you've faced as examples.
✨Emphasise Cross-Functional Collaboration
This role requires working closely with various teams, so be prepared to discuss your experience collaborating with engineering, legal, and risk teams. Share examples of how you’ve successfully navigated differing priorities and achieved common goals.
✨Prepare for Scenario-Based Questions
Expect questions that assess your problem-solving skills in real-world scenarios. Think about potential risks in AI systems and how you would address them, particularly in relation to data governance and compliance. Practising these scenarios can help you articulate your thought process clearly.