At a Glance
- Tasks: Design and maintain data pipelines using modern open-source technologies.
- Company: Join a fast-paced, innovative data engineering team in London.
- Benefits: Hybrid work model, competitive salary, and opportunities for professional growth.
- Why this job: Tackle complex data challenges and influence technical direction in a dynamic environment.
- Qualifications: Experience in SQL, Python, and building data ingestion pipelines.
- Other info: Ideal for those passionate about continuous learning and emerging data technologies.
The predicted salary is between £36,000 and £60,000 per year.
Role Overview
We are seeking an AI Agent Data Engineer to join the Technology, Data & Innovation division.
This role sits at the core of building next-generation AI processes and control agents deployed across multiple Agent Factories.
You will work closely with AI Acceleration Leads and cross-functional teams to drive projects from concept to production.
The position focuses on designing scalable, agent-ready data systems and enabling seamless integration between AI agents and enterprise environments.
Key Responsibilities
- Ensure organizational data is agent-ready by aligning data access, availability, and technical architecture.
- Design, build, and maintain robust data pipelines supporting real-time and batch processing for AI agents.
- Contribute to the technical implementation of agent systems using frameworks such as the Google Agent Development Kit (ADK) or comparable technologies.
- Develop and maintain client libraries for secure collaboration between data systems, agents, and internal platforms.
- Implement data-centric agent patterns and integrate agents with existing enterprise systems.
- Conduct continuous testing and validation of data integrations and pipelines throughout the development lifecycle.
- Incorporate feedback loops to ensure data quality, system reliability, and operational scalability.
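To give a flavour of the pipeline work described above, here is a minimal, hypothetical sketch of a batch ingestion step with validation and a quarantine path feeding a data-quality feedback loop. All names (`Record`, `run_batch`) are invented for illustration and are not part of any specific stack named in this posting:

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class Record:
    user_id: str
    event: str
    value: float

def validate(record: Record) -> bool:
    # Reject records that would corrupt downstream agent context.
    return bool(record.user_id) and record.value >= 0

def run_batch(records: Iterable[Record]) -> tuple[list[Record], list[Record]]:
    """Split a batch into clean rows and quarantined rows.

    Quarantined rows feed the data-quality feedback loop instead of
    silently disappearing, so reliability stays observable.
    """
    clean: list[Record] = []
    quarantined: list[Record] = []
    for r in records:
        (clean if validate(r) else quarantined).append(r)
    return clean, quarantined

batch = [
    Record("u1", "click", 1.0),
    Record("", "click", 2.0),       # missing user_id -> quarantined
    Record("u2", "purchase", -5.0), # negative value -> quarantined
]
clean, bad = run_batch(batch)
print(len(clean), len(bad))  # 1 2
```

The same split-and-quarantine shape applies to streaming input: the validation step runs per event, and the quarantine becomes a dead-letter queue.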
Professional Experience
- 5–7 years of experience as a Software Engineer or Data Engineer.
- Minimum 2 years of hands-on experience in AI/ML or data-intensive application development.
- Demonstrated experience designing, building, and testing MCP-based or similar integration solutions.
- Proven track record delivering production-grade data pipelines and backend systems for complex applications.
Technical Skills
- Strong proficiency with the Google Agent Development Kit (ADK) or comparable agent frameworks.
- Advanced software and data engineering expertise, including:
  - Data modeling
  - API development
  - Model Context Protocol (MCP) concepts
- Deep understanding of the agent development lifecycle and agent design patterns.
- Experience implementing testing strategies and iterative development practices.
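To make the MCP-style integration pattern above concrete, here is a heavily simplified, hypothetical sketch of the core idea: a registry that exposes named tools an agent can discover and invoke. This is not the real MCP SDK; `ToolRegistry` and `query_orders` are invented purely for illustration:

```python
from typing import Callable

class ToolRegistry:
    """Toy stand-in for an MCP-style tool server: tools are registered
    under a name, then discovered and invoked by an agent at runtime."""

    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., object]] = {}

    def tool(self, name: str) -> Callable:
        # Decorator that registers a function as a callable tool.
        def decorator(fn: Callable[..., object]) -> Callable[..., object]:
            self._tools[name] = fn
            return fn
        return decorator

    def list_tools(self) -> list[str]:
        # Discovery: the agent asks which capabilities exist.
        return sorted(self._tools)

    def call(self, name: str, **kwargs: object) -> object:
        # Invocation: the agent calls a tool by name with JSON-like args.
        return self._tools[name](**kwargs)

registry = ToolRegistry()

@registry.tool("query_orders")
def query_orders(customer_id: str) -> list[dict]:
    # A real implementation would query a warehouse; stubbed here.
    return [{"customer": customer_id, "total": 42.0}]

print(registry.list_tools())  # ['query_orders']
print(registry.call("query_orders", customer_id="c1"))
```

In the actual protocol, discovery and invocation happen over a transport with JSON schemas describing each tool's parameters; the registration-then-dispatch shape is the part this sketch preserves.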
Languages
- Professional proficiency in English (required).
- German language skills are a plus.
Data Engineer employer: Norton Blake
Contact Details:
Norton Blake Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role
✨Tip Number 1
Network like a pro! Reach out to folks in the industry on LinkedIn or at meetups. A friendly chat can open doors that a CV just can't.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data projects, especially those using SQL and Python. It’s a great way to demonstrate your expertise beyond the application.
✨Tip Number 3
Prepare for interviews by brushing up on common data engineering challenges. Be ready to discuss how you’d tackle real-world problems, especially around ETL processes and data pipelines.
✨Tip Number 4
Don’t forget to apply through our website! We love seeing candidates who are genuinely interested in joining our team. Plus, it makes tracking your application a breeze!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the skills and experiences that match the Data Engineer role. Highlight your proficiency in SQL, Python, and any relevant open-source frameworks. We want to see how you can contribute to our team!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data engineering and how your background aligns with our needs. Don’t forget to mention your experience with ETL processes and data pipelines.
Showcase Your Projects: If you've worked on any cool data projects, make sure to include them! Whether it's building data ingestion pipelines or optimising performance, we love seeing real-world applications of your skills. It gives us a glimpse into your problem-solving abilities.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it shows you’re keen on joining our team at StudySmarter!
How to prepare for a job interview at Norton Blake
✨Know Your Tech Stack
Make sure you’re well-versed in the technologies mentioned in the job description, like SQL, Python, and open-source frameworks. Brush up on your knowledge of Apache Spark and Kafka, as these are likely to come up during technical discussions.
✨Showcase Your Problem-Solving Skills
Prepare to discuss specific challenges you've faced in previous roles and how you tackled them. Use the STAR method (Situation, Task, Action, Result) to structure your answers, especially when it comes to debugging or optimising data pipelines.
✨Communicate Clearly
Practice explaining complex technical concepts in simple terms. You might be asked to explain your work to non-technical stakeholders, so being able to communicate effectively is key. Think about how you would describe your projects to someone without a technical background.
✨Demonstrate Your Passion for Learning
Be ready to talk about any recent technologies or trends in data engineering that excite you. Showing that you’re proactive about staying current with industry developments can set you apart from other candidates.