At a Glance
- Tasks: Design and operate innovative data pipelines for cutting-edge AI systems.
- Company: Fast-growing, AI-native consulting business with a focus on modern data foundations.
- Benefits: Competitive salary, hybrid work model, and opportunities for professional growth.
- Why this job: Join a dynamic team and shape the future of AI with meaningful data engineering.
- Qualifications: 5-8 years of data engineering experience, strong Python and SQL skills.
- Other info: Collaborative environment with exposure to exciting AI projects.
The predicted salary is between £68,000 and £85,000 per year.
We are working with a fast-growing, AI-native consulting business that is building production-grade data foundations to power modern AI systems - including knowledge graphs, semantic layers, retrieval systems, and agentic workflows. This role sits at the heart of that capability. It is not a traditional reporting or BI data engineering position. The focus is on building high-quality, reliable, meaning-aware data pipelines that make AI systems usable, trustworthy, and scalable in real client environments.
What the role is really about:
You will be a senior data engineer responsible for designing and operating data pipelines that do not just move data, but structure it properly - modelling entities, relationships, and constraints so downstream AI systems can reason over it. You will work closely with ontology specialists, knowledge graph engineers, and AI engineers to ensure data is semantically aligned, observable, and production-ready.
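As a loose illustration of what structuring data this way can mean in practice, here is a minimal Python sketch. The entity and relationship types, the taxonomy labels, and the validation helper are hypothetical examples for illustration only, not details taken from the role description.

```python
from dataclasses import dataclass, field


@dataclass
class Entity:
    """A real-world concept with a stable identifier (e.g. a customer or a product)."""
    entity_id: str
    entity_type: str                      # hypothetical taxonomy label, e.g. "Customer"
    attributes: dict = field(default_factory=dict)


@dataclass
class Relationship:
    """A typed edge between two entities: the building block of a knowledge graph."""
    source_id: str
    target_id: str
    relation: str                         # e.g. "PURCHASED", "BELONGS_TO"


def validate_relationship(rel: Relationship, known_ids: set[str]) -> None:
    """Constraint check: both endpoints must refer to entities that were already loaded."""
    if rel.source_id not in known_ids or rel.target_id not in known_ids:
        raise ValueError(f"Dangling relationship: {rel}")
```

The point of the sketch is the mindset: data carries explicit types, identities, and constraints, so a downstream knowledge graph or retrieval system can trust what an ID or an edge actually means.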
What you will be doing:
- Designing and owning end-to-end data pipelines using Python, SQL, and modern cloud platforms
- Structuring data around real-world concepts (entities, relationships, taxonomies), not just tables
- Building data foundations that support knowledge graphs, retrieval pipelines (RAG), embeddings, and agent workflows
- Implementing data quality, validation, lineage, and observability so pipelines are reliable and trusted (see the sketch after this list)
- Collaborating in a consulting-style delivery model, working directly with engineers and client stakeholders
- Creating reusable patterns, documentation, and standards that others can build on
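As a rough illustration of the quality and observability work mentioned above, the sketch below shows a simple batch-level quality gate in plain Python and pandas. The column names, the specific checks, and the fail-fast behaviour are assumptions made for the example, not requirements stated in the posting.

```python
import logging

import pandas as pd

logger = logging.getLogger("pipeline.quality")


def check_batch(df: pd.DataFrame) -> pd.DataFrame:
    """Run basic quality gates on a batch before it is published downstream."""
    # Completeness: key columns must not contain nulls (hypothetical column names).
    null_counts = df[["entity_id", "entity_type"]].isna().sum()
    if null_counts.any():
        raise ValueError(f"Null keys found: {null_counts.to_dict()}")

    # Uniqueness: entity identifiers must be unique within the batch.
    duplicates = df["entity_id"].duplicated().sum()
    if duplicates:
        raise ValueError(f"{duplicates} duplicate entity_id values in batch")

    # Observability: emit simple metrics so the run is traceable after the fact.
    logger.info("batch ok: rows=%d, columns=%d", len(df), df.shape[1])
    return df
```

In a real engagement these checks would typically live in whatever orchestration and data quality tooling the client already uses; the sketch only conveys the kind of guarantees the role is expected to build in.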
What we are looking for:
- Strong senior-level data engineering experience (typically 5-8 years)
- Excellent Python and SQL skills, with hands-on pipeline ownership
- Experience working with cloud data platforms (AWS, Azure, or GCP)
- Exposure to semantic modelling, knowledge graphs, or ontology-aligned data (depth can vary, mindset matters most)
- Comfortable working in ambiguous, client-facing environments with high ownership
Employer: Develop
Contact detail: Develop Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Value Engineer - Data Engineering role
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with potential colleagues on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data pipelines, projects, or any relevant work. This is your chance to demonstrate your expertise in Python, SQL, and cloud platforms, making you stand out from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on semantic modelling and knowledge graphs. Be ready to discuss how you've structured data around real-world concepts and how you ensure data quality and reliability in your projects.
✨Tip Number 4
Don’t forget to apply through our website! We’re always on the lookout for talented individuals like you. Plus, it’s a great way to get noticed by our hiring team and show your enthusiasm for joining us.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the skills and experiences that align with the Value Engineer role. Highlight your data engineering experience, especially with Python, SQL, and cloud platforms, to show us you’re the right fit.
Craft a Compelling Cover Letter: Use your cover letter to tell us why you're passionate about data engineering and how your background makes you a great candidate. Don’t just repeat your CV; give us insights into your thought process and problem-solving skills.
Showcase Relevant Projects: If you've worked on projects involving knowledge graphs or semantic modelling, make sure to mention them! We love seeing real-world applications of your skills, so share any relevant experiences that demonstrate your expertise.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you don’t miss out on any important updates from our team!
How to prepare for a job interview at Develop
✨Know Your Data Inside Out
Make sure you’re well-versed in data engineering concepts, especially around building data pipelines. Brush up on your Python and SQL skills, and be ready to discuss how you've structured data in previous roles. This will show that you can handle the responsibilities of the Value Engineer position.
✨Understand Semantic Modelling
Since this role involves working with knowledge graphs and semantic layers, take some time to understand these concepts. Be prepared to explain how you’ve applied them in past projects or how you would approach them in this new role. It’ll demonstrate your readiness to collaborate with ontology specialists and AI engineers.
✨Showcase Your Problem-Solving Skills
Expect questions about how you’ve tackled challenges in ambiguous environments. Think of specific examples where you’ve had to design reliable data pipelines under pressure. This will highlight your ability to thrive in client-facing situations and take ownership of your work.
✨Prepare for Collaborative Scenarios
This role requires a lot of teamwork, so be ready to discuss how you’ve worked with cross-functional teams in the past. Share examples of how you’ve collaborated with engineers and stakeholders to deliver successful projects. This will show that you can fit into their consulting-style delivery model.