At a Glance
- Tasks: Design and optimise cloud-native data pipelines using Azure technologies.
- Company: Join Amey, a vibrant and inclusive tech community.
- Benefits: Competitive salary, flexible working, training opportunities, and generous holidays.
- Other info: Mentorship opportunities and a focus on career growth await you.
- Why this job: Lead innovative data projects and make a real impact in a fast-paced environment.
- Qualifications: Experience in data engineering, strong communication skills, and a collaborative mindset.
The predicted salary is between £60,000 and £75,000 per year.
We are excited to offer a fantastic permanent opportunity with hybrid working and travel up to once a week to one of Amey's offices in Birmingham, London or Liverpool.
Hours of work: Monday to Friday, 37.5 hours.
Join our vibrant, inclusive community in Group IT and lead and scale our Azure-based data platform. You will own the end-to-end design and optimisation of high-performance, cloud-native data pipelines, ensuring secure, cost-efficient delivery of high-quality datasets across our medallion architecture.
Responsibilities
- Lead the design and implementation of distributed data processing pipelines (batch and streaming) using PySpark/Spark and Python on Azure Databricks.
- Architect and evolve our medallion data platform, ensuring data quality, observability and reliability across all layers (Bronze, Silver, Gold).
- Define and enforce strong data quality controls, validation rules and monitoring strategies to maintain trusted, production‑grade datasets.
- Make technical decisions on cluster strategy and cost optimisation (e.g., interactive vs. job clusters, autoscaling, serverless, pools, spot instances).
- Build and operationalise data workflows using Databricks Workflows, Azure Data Factory, or Fabric, ensuring efficient orchestration, scheduling and alerting.
- Drive API‑based data integrations, ensuring secure, scalable and low‑latency ingestion of data from third‑party and internal systems.
- Champion DevOps standards across the team (CI/CD, TDD, Infrastructure‑as‑Code, Git‑flow, Azure DevOps/GitHub).
- Embed and enforce security and governance best practices, including data masking, encryption and RBAC throughout the ETL lifecycle.
- Mentor junior engineers and contribute to technical standards and architecture roadmaps.
- Engage with data scientists, analysts and business stakeholders to deliver high‑value, production‑ready data products.
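To make the data quality responsibilities above concrete: promoting records from the Bronze layer to the trusted Silver layer typically means running each record through a set of validation rules and quarantining failures for remediation. The sketch below shows that pattern in plain Python; the field names, rules and thresholds are purely illustrative, not Amey's actual schema or tooling.

```python
from dataclasses import dataclass
from typing import Callable

# A named validation rule: a label plus a predicate over a raw record.
@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]

# Illustrative rules only - a real pipeline would define these per dataset.
RULES = [
    Rule("id_present", lambda r: bool(r.get("id"))),
    Rule("amount_non_negative",
         lambda r: isinstance(r.get("amount"), (int, float)) and r["amount"] >= 0),
]

def promote(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split Bronze records into (silver, quarantined) using RULES.

    Quarantined records carry the names of the rules they failed,
    which feeds the monitoring and alerting described above.
    """
    silver, quarantined = [], []
    for record in records:
        failed = [rule.name for rule in RULES if not rule.check(record)]
        if failed:
            quarantined.append({**record, "_failed_rules": failed})
        else:
            silver.append(record)
    return silver, quarantined
```

In a Databricks pipeline the same idea is usually expressed as Delta Lake constraints or expectation checks on a DataFrame rather than per-record Python, but the gate-and-quarantine shape is the same.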
Qualifications
- Ability to drive technical direction and influence architectural decisions.
- Strong stakeholder management and communication skills.
- Mentoring mindset, collaborative and proactive.
- Operates with a product mindset and ownership mentality.
- Thrives in a fast‑moving environment with ambiguity and growth.
- Extensive experience working with large‑scale data engineering solutions in public cloud (Azure preferred).
- Deep hands‑on experience with Databricks, ADLS, Delta Lake and distributed compute paradigms.
- Prior ownership of mission‑critical production pipelines, including data quality monitoring, automated alerting and remediation.
- Strong track record of cost and performance optimisation at scale, including cluster tuning and workload refactoring.
- Experience with leading orchestration tools (e.g. Databricks Workflows, ADF, Airflow).
- Proven experience integrating APIs and external data services into data platforms.
- Proficiency in SQL, Python, PySpark and software engineering best practices.
- Experience in DevOps, CI/CD, infrastructure automation and data security.
- Exposure to MLOps, ML model deployment or LLM‑based pipelines is a plus.
Benefits
- Remuneration - Enjoy a competitive annual salary with the potential for yearly reviews to ensure you’re rewarded for your contributions.
- Career Growth - Shine in your career with advancement opportunities.
- Training Opportunities - Unlock your potential with comprehensive training, including fully funded leadership programs tailored to your personal growth.
- Holidays - Enjoy at least 24 days of holiday plus bank holidays, with the opportunity to buy additional days.
- Pension - Generous pension scheme, with extra contributions from Amey.
- Flexible benefits - Customise your benefits with options such as insurance benefits, Cycle2Work scheme and access to discounted gym membership.
- Exclusive Discounts - Access our online portal filled with discounts from leading retailers, healthcare services, and more, helping you save on the things that matter.
- Give Back to the Community - Two Social Impact Days each year for volunteering and fundraising opportunities.
- Family-friendly policies for new parents and for those who provide care for a dependant.
- Membership of our Affinity Networks, which connect, support and inspire diverse communities within Amey.
Data Lake Engineer in London - Employer: Amey Group Services LTD
Contact Detail:
Amey Group Services LTD Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Lake Engineer role in London
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with current employees at Amey. A friendly chat can sometimes lead to opportunities that aren’t even advertised!
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving Azure, Databricks, and data pipelines. This gives you a chance to demonstrate your expertise beyond just a CV.
✨Tip Number 3
Prepare for interviews by brushing up on your technical knowledge and soft skills. Practice explaining complex concepts simply, as you'll need to communicate effectively with both techies and non-techies.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in joining our vibrant community.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data Lake Engineer role. Highlight your experience with Azure, Databricks, and data pipelines. We want to see how your skills match what we're looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data engineering and how you can contribute to our team. Keep it engaging and relevant to the job description.
Showcase Your Projects: If you've worked on any relevant projects, make sure to mention them! Whether it's building data pipelines or optimising performance, we love to see real-world examples of your work.
Apply Through Our Website: We encourage you to apply through our website for a smoother process. It helps us keep track of your application and ensures you don’t miss out on any important updates from us!
How to prepare for a job interview at Amey Group Services LTD
✨Know Your Tech Inside Out
Make sure you’re well-versed in Azure, Databricks, and the tools mentioned in the job description. Brush up on your PySpark and Python skills, and be ready to discuss how you've implemented data pipelines in the past. Real-world examples will show you know your stuff!
✨Showcase Your Problem-Solving Skills
Prepare to discuss specific challenges you've faced in data engineering and how you overcame them. Think about times when you optimised performance or ensured data quality. This will demonstrate your ability to think critically and adapt in a fast-paced environment.
✨Engage with Stakeholders
Since strong stakeholder management is key, be ready to talk about how you've collaborated with data scientists, analysts, and other teams. Highlight your communication skills and how you’ve ensured everyone is on the same page during projects.
✨Emphasise Your Mentoring Experience
If you have experience mentoring junior engineers, make sure to mention it! Discuss how you’ve contributed to team growth and technical standards. This shows you’re not just a tech whiz but also a team player who values collaboration.