At a Glance
- Tasks: Design and build data products using Databricks, ensuring quality and governance.
- Company: Join a forward-thinking company focused on data innovation.
- Benefits: Attractive salary, flexible working options, and opportunities for professional growth.
- Why this job: Make an impact by transforming data into actionable insights for businesses.
- Qualifications: 6+ years in data engineering, strong Python and SQL skills required.
- Other info: Collaborative team environment with a focus on cutting-edge technology.
The predicted salary is between £36,000 and £60,000 per year.
We are seeking a Senior Data Engineer to design, build, and operate production-grade data products across customer, commercial, financial, sales, and data domains. The role is strongly focused on Databricks-based engineering, data quality, governance, and DevOps-aligned delivery.
The individual will report to the Data Engineering Manager in the DnA team and partner closely with the DnA Product Owner, Data Product Manager, Data Scientists, Head of Data and Analytics, and IT integration teams to convert business requirements into governed, decision-grade datasets that are embedded in business processes and trusted for reporting, analytics, and advanced use cases.
Your responsibilities:
- Data Product Pipeline Engineering
- Design, build, and maintain pipelines in Databricks using Delta Lake/Delta Live Tables (see the sketch after this list).
- Implement medallion architectures (Bronze/Silver/Gold) and deliver reusable, discoverable data products.
- Ensure pipelines meet non-functional requirements (freshness, latency, completeness, scalability, cost).
- Own and operate Databricks assets including Jobs/Workflows, notebooks, SQL, and Unity Catalog objects.
- Use Git-based DevOps practices (branching, PR reviews, CI/CD) and Databricks Asset Bundles (DABs) to promote changes across dev/test/prod safely.
- Implement monitoring, alerting, runbooks, incident response, and root cause analysis (RCA).
- Enforce governance and security using Unity Catalog (lineage, classification, ACLs, row/column-level security).
- Define and maintain data-quality rules, expectations, and SLOs within pipelines.
- Support root cause analysis of data anomalies and production issues.
- Partner with Product Owner, Product Manager, Data Engineering Manager and business stakeholders to translate requirements into delivery-ready functional and non-functional scope.
- Collaborate with IT platform teams to agree data contracts, SLAs, and schema evolution approaches.
- Produce clear technical documentation (data contracts, source-to-target mappings, release notes).
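To make the pipeline engineering and data-quality expectations above concrete, here is a minimal sketch of a Bronze-to-Silver Delta Live Tables pipeline of the kind this role would own. The source path, table names, and quality rules are illustrative assumptions, not details taken from the role description.

```python
# Minimal DLT sketch: Bronze ingestion plus a validated Silver table.
# Runs inside a Delta Live Tables pipeline, where `spark` is provided.
# All paths, table names, and rules below are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw customer events, ingested as-is")
def customers_bronze():
    # Auto Loader (cloudFiles) incrementally picks up new files.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/raw/customer/events/")  # hypothetical landing path
    )

@dlt.table(comment="Silver: validated, deduplicated customer events")
@dlt.expect_or_drop("valid_id", "customer_id IS NOT NULL")  # enforced DQ rule: bad rows dropped
@dlt.expect("recent_event", "event_ts >= current_date() - INTERVAL 30 DAYS")  # monitored only
def customers_silver():
    return (
        dlt.read_stream("customers_bronze")
        .withColumn("ingested_at", F.current_timestamp())
        .dropDuplicates(["customer_id", "event_ts"])
    )
```

Expectations declared this way are recorded in the DLT event log, which is the natural hook for the monitoring, alerting, and SLO reporting described above.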
Essential skills/knowledge/experience:
- 6+ years' experience in data engineering or advanced analytics engineering roles.
- Strong hands-on expertise in Python and SQL.
- Proven experience building production pipelines in Databricks.
- Great attention to detail, with the ability to create effective documentation and process diagrams for your data assets.
- Solid understanding of data modelling, performance tuning, and cost optimisation.
Desirable skills/knowledge/experience:
- Strong hands-on Databricks Lakehouse experience, including Delta Lake and Delta Live Tables (DLT) for building and operating batch and streaming pipelines using medallion architectures.
- Lakehouse monitoring, data quality, and observability, including:
- DLT expectations and pipeline health monitoring
- Job/workflow metrics, alerting, and SLA/SLO management
- Incident response, runbooks, and root cause analysis
- Unity Catalog governance and security, including lineage, classification/tagging, table ACLs, and row/column-level security to support regulated and trusted reporting (see the first sketch after this list).
- Databricks DevOps/DataOps, including Git-based development, CI/CD, automated testing, and environment promotion across dev/test/prod using Databricks Asset Bundles (DABs).
- Performance and cost optimisation in Databricks, including cluster policies, autoscaling, Photon/serverless, and Delta table optimisation (partitioning, Z-Ordering, OPTIMIZE/VACUUM); see the second sketch after this list.
- Semantic layer and metrics engineering experience, including:
- Designing and maintaining metrics views/semantic models on top of Lakehouse data
- Defining consistent business metrics and logic for reuse across dashboards, analytics, and reporting
- Supporting self-service analytics while preserving governance and trust
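As an illustration of the Unity Catalog row/column-level security called out above, the first sketch applies a row filter and a column mask from a notebook. The catalog, schema, table, function, and group names are hypothetical; `SET ROW FILTER`, `SET MASK`, and `is_account_group_member` are standard Unity Catalog SQL.

```python
# Sketch: row- and column-level security via Unity Catalog.
# All object and group names are hypothetical.

# Row filter: members of 'sales_admins' see everything; others see UK rows only.
spark.sql("""
    CREATE OR REPLACE FUNCTION main.sales.uk_rows_only(region STRING)
    RETURN is_account_group_member('sales_admins') OR region = 'UK'
""")
spark.sql("ALTER TABLE main.sales.customers SET ROW FILTER main.sales.uk_rows_only ON (region)")

# Column mask: redact email addresses for anyone outside 'pii_readers'.
spark.sql("""
    CREATE OR REPLACE FUNCTION main.sales.mask_email(email STRING)
    RETURN CASE WHEN is_account_group_member('pii_readers') THEN email ELSE '***REDACTED***' END
""")
spark.sql("ALTER TABLE main.sales.customers ALTER COLUMN email SET MASK main.sales.mask_email")
```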
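And for the Delta table optimisation commands listed under performance and cost, a second sketch of the routine maintenance a pipeline owner might schedule; the table name and retention window are illustrative.

```python
# Sketch: scheduled Delta table maintenance (e.g. a nightly Databricks job).
# Table name and retention window are illustrative.

# Compact small files and co-locate rows on a frequently filtered column.
spark.sql("OPTIMIZE main.sales.orders ZORDER BY (customer_id)")

# Drop files no longer referenced by the table, keeping 7 days (168 hours)
# of history for time travel; shorter windows can break long-running readers.
spark.sql("VACUUM main.sales.orders RETAIN 168 HOURS")
```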
Senior Data Engineer in England. Employer: Avance Consulting
Contact Details:
Avance Consulting Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data Engineer role in England
✨Tip Number 1
Network like a pro! Reach out to folks in your industry on LinkedIn or at meetups. A friendly chat can lead to opportunities that aren’t even advertised yet.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data engineering projects, especially those using Databricks. This gives potential employers a taste of what you can do.
✨Tip Number 3
Prepare for interviews by brushing up on common data engineering questions and scenarios. Practice explaining your past projects and how you tackled challenges.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are proactive!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Senior Data Engineer role. Highlight your experience with Databricks, Python, and SQL, and don’t forget to showcase any relevant projects that demonstrate your skills in building production pipelines.
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re passionate about data engineering and how your background aligns with our needs. Mention specific experiences that relate to the responsibilities outlined in the job description.
Showcase Your Documentation Skills: Since clear technical documentation is key for this role, consider including examples of your documentation work or mentioning how you’ve effectively communicated complex data concepts in the past. This will show us you understand the importance of clarity in data engineering.
Apply Through Our Website: We encourage you to apply through our website for a smoother application process. It helps us keep track of your application and ensures you don’t miss out on any important updates from us!
How to prepare for a job interview at Avance Consulting
✨Know Your Databricks Inside Out
Make sure you brush up on your Databricks skills, especially around Delta Lake and Delta Live Tables. Be ready to discuss how you've built production pipelines and the medallion architecture you've implemented in past projects.
✨Showcase Your Python and SQL Expertise
Prepare to demonstrate your hands-on experience with Python and SQL. Have examples ready that highlight your ability to write efficient queries and scripts, as well as any performance tuning you've done in previous roles.
✨Understand Data Governance and Quality
Familiarise yourself with data governance principles, especially using Unity Catalog. Be prepared to talk about how you've enforced data quality rules and handled data anomalies in your past work.
✨Collaborate Like a Pro
Since this role involves working closely with various stakeholders, think of examples where you've successfully collaborated with product owners, managers, and IT teams. Highlight your communication skills and how you translate technical requirements into actionable tasks.