At a Glance
- Tasks: Define evaluation frameworks and develop automated test pipelines for AI services.
- Company: Global organisation leading AI enablement and platform engineering.
- Benefits: Competitive daily rate, hybrid work model, and exposure to cutting-edge AI technologies.
- Other info: Collaborative environment with opportunities for influence and career growth.
- Why this job: Join a transformative AI initiative and shape the future of enterprise AI capabilities.
- Qualifications: Strong Python skills, experience with AI systems, and automated testing expertise.
A global organisation is building a centralised AI enablement and platform engineering function focused on delivering scalable, secure, and governed AI capabilities across the enterprise. This role sits within a programme delivering enterprise-grade agentic AI infrastructure, including internal AI assistants, retrieval and search services, extensibility frameworks, and governance tooling.
The programme is focused on delivering a production-grade internal agentic AI platform, including:
- Development of an enterprise AI assistant capable of reasoning, planning, and tool orchestration
- Operation of enterprise retrieval, search, and grounding services for approved data sources
- Delivery of a secure internal gateway layer providing discovery, observability, policy enforcement, and lifecycle management for AI-integrated services
- Design and development of AI-integrated services and reusable capabilities that safely expose internal and third-party systems to AI agents
- Establishment of evaluation, governance, and quality-control frameworks to support scalable and compliant deployment of AI capabilities
The programme currently follows a centrally delivered model while evolving towards a federated contribution approach over time.
Key Responsibilities
- Define and implement evaluation frameworks covering correctness, safety, reliability, and regression impact for AI-integrated services
- Develop and maintain automated test pipelines for agentic workflows, including tool orchestration and multi-step execution paths
- Identify, evaluate, and mitigate AI system failure modes such as hallucinations, invalid inputs, latency issues, and inappropriate tool usage
- Produce testing and governance evidence required for internal approval and operational processes
- Collaborate closely with ML Engineers and platform teams to embed testability and evaluation capabilities into AI services
- Contribute to the long-term quality assurance and governance strategy for enterprise-wide AI platform adoption
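To make the evaluation and test-pipeline responsibilities above concrete, here is a minimal illustrative sketch. The `run_agent` stub, the `ALLOWED_TOOLS` allow-list, and the scoring dimensions are hypothetical placeholders, not part of the organisation's actual stack; a real suite would call the deployed assistant and plug into an automated pipeline:

```python
from dataclasses import dataclass, field

@dataclass
class AgentTrace:
    """Record of one agentic run: the tool calls made and the final answer."""
    tool_calls: list = field(default_factory=list)
    answer: str = ""

def run_agent(query: str) -> AgentTrace:
    """Hypothetical stand-in for a real agent invocation; a production
    pipeline would call the deployed assistant instead."""
    return AgentTrace(tool_calls=["search", "summarise"],
                      answer="London is the capital of the UK.")

# Illustrative policy: only these tools may appear in an execution path.
ALLOWED_TOOLS = {"search", "summarise", "calculator"}

def evaluate(trace: AgentTrace, expected_substring: str) -> dict:
    """Score a single run on correctness, safety (tool allow-list),
    and reliability (a non-empty answer)."""
    return {
        "correct": expected_substring.lower() in trace.answer.lower(),
        "safe_tools": all(t in ALLOWED_TOOLS for t in trace.tool_calls),
        "non_empty": bool(trace.answer.strip()),
    }

trace = run_agent("What is the capital of the UK?")
report = evaluate(trace, expected_substring="London")
print(report)
```

In practice each dimension would be asserted in a test runner such as pytest, so a failing check (for example, an unapproved tool in the trace) blocks the release and produces the governance evidence mentioned above.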
Essential Skills and Experience
- Strong Python development experience, particularly for automation and test frameworks
- Experience with LLM and RAG evaluation tooling, frameworks, or custom evaluation pipelines
- Expertise in automated unit, integration, and regression testing
- Good understanding of agentic AI systems, associated risks, and operational failure modes
- Ability to assess technical solutions against governance, audit, and security requirements
- Experience working within regulated or highly governed engineering environments
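As a flavour of the RAG evaluation experience listed above, here is a deliberately crude groundedness check: it scores what fraction of answer sentences share a content word with the retrieved passages. The word-length heuristic and the sample passage are illustrative assumptions; real evaluation tooling would use richer measures:

```python
import re

def content_words(text: str) -> set:
    """Lowercased words of length >= 4 -- a rough stand-in for content terms."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) >= 4}

def grounding_score(answer: str, passages: list) -> float:
    """Fraction of answer sentences sharing a content word with any
    retrieved passage; a crude proxy for grounded (non-hallucinated) output."""
    sentences = [s for s in re.split(r"[.!?]+", answer) if s.strip()]
    if not sentences:
        return 0.0
    passage_words = set().union(*(content_words(p) for p in passages))
    grounded = sum(1 for s in sentences if content_words(s) & passage_words)
    return grounded / len(sentences)

passages = ["The gateway enforces policy checks before tool invocation."]
print(grounding_score("The gateway enforces policy checks.", passages))  # 1.0
```

A regression pipeline would track such scores over a fixed question set and flag releases where groundedness drops below an agreed threshold.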
What’s on Offer
- Opportunity to contribute to large-scale enterprise AI transformation initiatives
- Exposure to cutting-edge AI platform engineering and governance challenges
- Collaborative environment working alongside platform engineers, ML specialists, and architecture teams
- Influence over the development of long-term AI quality and governance standards
- Opportunity to shape scalable AI engineering practices within a complex enterprise environment
Employer: TEC PARTNERS LIMITED (Quality Engineer - Corporate Engineering AI, London)
Contact Detail:
TEC PARTNERS LIMITED Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Quality Engineer - Corporate Engineering AI role in London
✨Tip Number 1
Network like a pro! Reach out to folks in the AI and engineering space on LinkedIn or at industry events. A friendly chat can open doors that a CV just can't.
✨Tip Number 2
Show off your skills! If you’ve got a portfolio of projects or contributions to open-source, make sure to highlight them. It’s a great way to demonstrate your Python prowess and understanding of AI systems.
✨Tip Number 3
Prepare for those interviews! Brush up on your knowledge of automated testing and AI governance. We recommend practising common interview questions and even doing mock interviews with friends.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are proactive about their job search!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the skills and experiences that match the Quality Engineer role. Highlight your Python development experience and any work with AI systems to grab our attention!
Craft a Compelling Cover Letter: Use your cover letter to tell us why you're passionate about AI and how your background aligns with our goals. Be specific about your experience with automated testing and governance frameworks.
Showcase Relevant Projects: If you've worked on projects involving AI or automated testing, don’t hold back! Share details about your contributions and the impact they had. We love seeing real-world applications of your skills.
Apply Through Our Website: We encourage you to apply directly through our website for the best chance of getting noticed. It’s the easiest way for us to keep track of your application and get back to you quickly!
How to prepare for a job interview at TEC PARTNERS LIMITED
✨Know Your AI Stuff
Make sure you brush up on your knowledge of agentic AI systems and their associated risks. Be ready to discuss how you've tackled issues like hallucinations or latency in past projects. This will show that you understand the technical challenges and can contribute effectively.
✨Showcase Your Python Skills
Since strong Python development experience is essential, prepare to demonstrate your expertise. Bring examples of automation and test frameworks you've developed. If possible, be ready to solve a coding challenge during the interview to showcase your skills live.
✨Understand Evaluation Frameworks
Familiarise yourself with evaluation frameworks for AI-integrated services. Be prepared to discuss how you would define and implement these frameworks, focusing on correctness, safety, and reliability. This will highlight your strategic thinking and understanding of quality assurance.
✨Collaborative Mindset
This role involves working closely with ML Engineers and platform teams, so emphasise your collaborative skills. Share examples of how you've successfully worked in cross-functional teams before, and how you can contribute to embedding testability into AI services.