At a Glance
- Tasks: Lead a data engineering team and design scalable data architectures for impactful analytics.
- Company: Join a top UK tech and cloud services provider with a focus on innovation.
- Benefits: Flexible hybrid working, competitive salary, and opportunities for professional growth.
- Other info: Mentorship opportunities and a dynamic environment for career advancement.
- Why this job: Shape the future of data engineering and drive strategic decision-making across the organisation.
- Qualifications: Senior-level experience in data engineering and strong skills in Python, PySpark, and SQL.
The predicted salary is between £70,000 and £90,000 per year.
We are recruiting for an experienced Data Engineering Lead, working for one of the UK's leading technology and cloud services providers, as they continue to invest in a modern, scalable Microsoft Fabric data platform. This role owns the end-to-end data engineering architecture - from ingestion and lakehouse design to enabling trusted, self-service analytics across the organisation. You will lead a small engineering team, define technical standards, and work closely with senior stakeholders to ensure data underpins strategic and operational decision-making.
The purpose of the role is to take ownership of the organisation's data platform architecture and delivery, ensuring reliable, scalable ingestion pipelines, a well-designed data warehouse, and high-quality analytics products.
- Data Platform & Architecture
- Own the data warehouse schema, lakehouse design, and core data models
- Translate business requirements into pragmatic, scalable data architecture decisions
- Define, implement, and maintain data contracts aligned to business domains
- Establish data quality, observability, and SLA frameworks
- Engineering Delivery
- Design and evolve scalable data pipelines (batch and incremental watermark-based ingestion; see the sketch after this list) using:
- Microsoft Fabric Pipelines
- Azure Data Factory
- PySpark / Python Fabric Notebooks
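The incremental, watermark-based ingestion mentioned in the list above is typically implemented by tracking the last successfully loaded timestamp per source and only pulling newer rows on each run. The following is a minimal PySpark sketch of that pattern, as it might look in a Fabric notebook; the table names, paths, and the last_modified column are illustrative assumptions rather than the employer's actual schema, and a production pipeline would merge into the watermark table rather than append.

```python
# Minimal sketch of incremental, watermark-based ingestion in PySpark.
# All table names and the "last_modified" column are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("watermark_ingest").getOrCreate()

WATERMARK_TABLE = "control.ingest_watermarks"   # hypothetical metadata table
SOURCE_TABLE = "staging.orders_raw"             # hypothetical source
TARGET_TABLE = "lakehouse.orders"               # hypothetical lakehouse table

# 1. Read the last successful watermark for this source (default to epoch on first run).
wm_row = (spark.table(WATERMARK_TABLE)
               .filter(F.col("source") == SOURCE_TABLE)
               .select("last_watermark")
               .first())
last_watermark = wm_row["last_watermark"] if wm_row else "1970-01-01 00:00:00"

# 2. Pull only rows modified since the previous run.
increment = (spark.table(SOURCE_TABLE)
                  .filter(F.col("last_modified") > F.lit(last_watermark)))

if increment.head(1):
    # 3. Append the increment to the target Delta table (OneLake/lakehouse).
    increment.write.format("delta").mode("append").saveAsTable(TARGET_TABLE)

    # 4. Advance the watermark to the max timestamp just loaded.
    new_watermark = increment.agg(F.max("last_modified")).first()[0]
    (spark.createDataFrame([(SOURCE_TABLE, new_watermark)],
                           ["source", "last_watermark"])
          .write.format("delta").mode("append").saveAsTable(WATERMARK_TABLE))
```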
Requirements:
- Senior-level experience in data engineering roles
- Experience leading or mentoring data engineers
- Hands-on experience with Microsoft Fabric, including Workspaces, OneLake, Fabric Warehouses, Fabric Notebooks, Data Pipelines, and Direct Lake semantic models
- Strong Python and PySpark development capability
- Solid SQL skills for warehouse development and metadata layers
- Proven ownership of production data systems end-to-end
- Wider Azure data platform experience, ideally gained in telecoms or cloud-focused environments
- Experience integrating REST APIs, SFTP, and cloud-based data sources
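Integrating REST API sources, as in the final requirement above, usually comes down to paginated extraction landed into a raw layer for the pipelines described earlier to process. Below is a minimal Python sketch of that approach; the endpoint, auth header, paging scheme, and output path are assumptions for illustration and are not specified in the role.

```python
# Minimal sketch of paginated REST API ingestion into a landing zone.
# Endpoint, auth, paging parameters, and output path are all hypothetical.
import json
import requests

BASE_URL = "https://api.example.com/v1/records"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}     # hypothetical auth

def fetch_all(page_size: int = 500) -> list[dict]:
    """Pull every page from the source API and return the combined records."""
    records, page = [], 1
    while True:
        resp = requests.get(BASE_URL,
                            headers=HEADERS,
                            params={"page": page, "page_size": page_size},
                            timeout=30)
        resp.raise_for_status()
        batch = resp.json().get("results", [])
        if not batch:
            break
        records.extend(batch)
        page += 1
    return records

if __name__ == "__main__":
    data = fetch_all()
    # Land the raw payload as JSON lines for downstream pipeline processing.
    with open("landing/records.jsonl", "w", encoding="utf-8") as fh:
        for row in data:
            fh.write(json.dumps(row) + "\n")
```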
Data Science Team Lead & Data Engineering Team Lead in Nottingham - Employer: ECS
Contact Details:
ECS Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Science Team Lead & Data Engineering Team Lead role in Nottingham
✨Tip Number 1
Network like a pro! Reach out to your connections in the data engineering field, attend meetups, and engage with industry professionals on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving Microsoft Fabric, Azure Data Factory, and Python. This will give potential employers a taste of what you can do and set you apart from the competition.
✨Tip Number 3
Prepare for interviews by brushing up on common data engineering scenarios and challenges. Be ready to discuss how you've tackled issues in past roles, particularly around data pipelines and analytics. Practice makes perfect!
✨Tip Number 4
Don't forget to apply through our website! We’re always on the lookout for talented individuals like you. Keep an eye on our job listings and make sure your application stands out by tailoring it to the specific role you're after.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data Engineering Lead role. Highlight your experience with Microsoft Fabric, data architecture, and any leadership roles you've had. We want to see how your skills match what we're looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data engineering and how you can contribute to our team. Be sure to mention specific projects or achievements that relate to the job description.
Showcase Your Technical Skills: Highlight your hands-on experience with Python, PySpark, and Azure Data Factory. We love seeing concrete examples of how you've used these tools in past roles!
Apply Through Our Website: We encourage you to apply through our website for the best chance of getting noticed. It’s super easy, and you’ll be able to keep track of your application status. Plus, we love seeing candidates who take the initiative!
How to prepare for a job interview at ECS
✨Know Your Data Inside Out
Make sure you’re well-versed in the specifics of data engineering and architecture. Brush up on Microsoft Fabric, Azure Data Factory, and your Python and PySpark skills. Being able to discuss your hands-on experience with these tools will show that you’re not just a theoretical expert but someone who can get things done.
✨Showcase Your Leadership Skills
As a Data Engineering Lead, you'll be expected to mentor and guide a team. Prepare examples of how you've successfully led teams in the past, handled conflicts, or driven projects to completion. Highlight your experience in conducting performance reviews and developing talent within your team.
✨Understand Business Needs
Be ready to discuss how you translate business requirements into scalable data architecture decisions. Think about specific instances where your data solutions have directly impacted strategic decision-making. This will demonstrate your ability to align technical work with business goals.
✨Prepare for Technical Questions
Expect to dive deep into technical discussions during your interview. Brush up on designing scalable data pipelines and establishing data quality frameworks. Practise explaining complex concepts clearly, as you may need to communicate these ideas to non-technical stakeholders.