At a Glance
- Tasks: Design and build production-grade data products using Databricks and Python.
- Company: Leading tech firm in London with a hybrid work culture.
- Benefits: Competitive salary, flexible working, and opportunities for professional growth.
- Why this job: Join a dynamic team to shape data solutions that drive business impact.
- Qualifications: 6+ years in data engineering, strong Python and SQL skills required.
- Other info: Collaborative environment with excellent career advancement potential.
The predicted salary is between £48,000 and £72,000 per year.
Work Location: London
Mode of Working: Hybrid
Office Attendance: 2-3 days per week
Other Working Conditions: May be required to travel to the Dublin office
The Role
We are seeking a Senior Data Engineer to design, build, and operate production-grade data products across customer, commercial, financial, and sales data domains. The role is strongly focused on Databricks-based engineering, data quality, governance, and DevOps-aligned delivery.
The individual will report to the Data Engineering Manager in the DnA (Data and Analytics) team, partnering closely with the DnA Product Owner, Data Product Manager, Data Scientists, Head of Data and Analytics, and IT integration teams to turn business requirements into governed, decision-grade datasets trusted for reporting, analytics, and advanced use cases.
Your Responsibilities
- Design, build, and maintain pipelines in Databricks using Delta Lake / Delta Live Tables (see the sketch after this list)
- Implement medallion architectures (Bronze / Silver / Gold) and deliver reusable, discoverable data products
- Ensure pipelines meet non-functional requirements (freshness, latency, completeness, scalability, cost)
- Own and operate Databricks assets including Jobs/Workflows, notebooks, SQL, and Unity Catalog objects
- Use Git-based DevOps practices (branching, PR reviews, CI/CD) and Databricks Asset Bundles (DABs) to promote changes across dev/test/prod
- Implement monitoring, alerting, runbooks, incident response, and root cause analysis (RCA)
- Enforce governance and security using Unity Catalog (lineage, classification, ACLs, row/column-level security)
- Define and maintain data-quality rules, expectations, and SLOs within pipelines
- Support root cause analysis of data anomalies and production issues
- Partner with Product Owner, Product Manager, Data Engineering Manager, and business stakeholders to translate requirements into delivery-ready functional and non-functional scope
- Collaborate with IT platform teams to agree data contracts, SLAs, and schema evolution approaches
- Produce clear technical documentation (data contracts, source-to-target mappings, release notes)
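For illustration, here is a minimal sketch of the kind of pipeline these responsibilities describe: a Delta Live Tables definition that lands raw events in Bronze and promotes them to Silver with declared data-quality expectations. The table names, source path, and expectation rules are hypothetical examples, not taken from the role description.

```python
# Hypothetical Delta Live Tables pipeline: Bronze ingest plus Silver promotion
# with data-quality expectations. Table names, the landing path, and the
# expectation rules are illustrative assumptions.
import dlt
from pyspark.sql import functions as F


@dlt.table(comment="Bronze: raw customer events ingested as-is via Auto Loader.")
def bronze_customer_events():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/raw/customer/events/")  # hypothetical landing path
    )


@dlt.table(comment="Silver: typed, deduplicated customer events.")
@dlt.expect_or_drop("valid_customer_id", "customer_id IS NOT NULL")
@dlt.expect("fresh_event", "event_ts > current_timestamp() - INTERVAL 7 DAYS")
def silver_customer_events():
    return (
        dlt.read_stream("bronze_customer_events")
        .withColumn("event_ts", F.to_timestamp("event_ts"))
        .dropDuplicates(["event_id"])
    )
```

Failed `expect_or_drop` rows are removed and recorded in the pipeline's event log, while the plain `expect` only reports violations; SLO-style freshness and completeness checks would typically be layered on top of those metrics.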
Your Profile
Essential Skills / Knowledge / Experience
- 6+ years' experience in data engineering or advanced analytics engineering roles
- Strong hands-on expertise in Python and SQL
- Proven experience building production pipelines in Databricks
- Strong attention to detail with effective documentation and process diagramming skills
- Solid understanding of data modelling, performance tuning, and cost optimisation
Desirable Skills / Knowledge / Experience
- Strong hands-on Databricks Lakehouse experience, including Delta Lake and Delta Live Tables (DLT)
- Lakehouse monitoring, data quality, and observability
- Unity Catalog governance and security, including lineage, classification/tagging, table ACLs, and row/column-level security
- Databricks DevOps/DataOps including Git-based development, CI/CD, automated testing, and environment promotion using Databricks Asset Bundles (DABs)
- Performance and cost optimisation in Databricks, including cluster policies, autoscaling, Photon/serverless, and Delta table optimisation (partitioning, Z-Ordering, OPTIMIZE/VACUUM); see the sketch after this list
- Semantic layer and metrics engineering experience
- Cloud-native analytics platform experience (Azure preferred)
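As a rough illustration of the Unity Catalog governance and table-optimisation items above, the snippet below applies a table grant, a row filter for row-level security, and routine Delta maintenance through `spark.sql` from a notebook or job. Every object name (catalog, schema, table, filter function, group) is invented for the example.

```python
# Hypothetical Unity Catalog governance and Delta maintenance, run where
# `spark` is in scope. All object names are invented for illustration.

# Grant read access on a Gold table to an analyst group.
spark.sql("GRANT SELECT ON TABLE main.gold.customer_orders TO `analysts`")

# Row-level security: a boolean SQL UDF attached as a row filter, so users
# only see rows for account groups they belong to.
spark.sql("""
    CREATE OR REPLACE FUNCTION main.gold.region_filter(region STRING)
    RETURNS BOOLEAN
    RETURN is_account_group_member(region)
""")
spark.sql("""
    ALTER TABLE main.gold.customer_orders
    SET ROW FILTER main.gold.region_filter ON (region)
""")

# Routine Delta maintenance: compact small files and co-locate data on a
# common filter column, then remove files outside the retention window.
spark.sql("OPTIMIZE main.gold.customer_orders ZORDER BY (customer_id)")
spark.sql("VACUUM main.gold.customer_orders RETAIN 168 HOURS")
```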
Senior Data Engineer employer: Stackstudio Digital Ltd.
Contact Details:
Stackstudio Digital Ltd. Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data Engineer role
✨Network Like a Pro
Get out there and connect with people in the industry! Attend meetups, webinars, or even just grab a coffee with someone who works in data engineering. Building relationships can lead to job opportunities that aren’t even advertised.
✨Show Off Your Skills
Don’t just tell them what you can do; show them! Create a portfolio of your projects, especially those involving Databricks and Python. Share your GitHub link when you apply through our website to give potential employers a taste of your work.
✨Ace the Interview
Prepare for technical interviews by brushing up on your SQL and Python skills. Practice common data engineering problems and be ready to discuss your past projects in detail. Remember, they want to see how you think and solve problems!
✨Follow Up
After your interview, don’t forget to send a thank-you email! It’s a great way to express your appreciation and reiterate your interest in the role. Plus, it keeps you fresh in their minds as they make their decision.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Senior Data Engineer role. Highlight your experience with Python, SQL, and Databricks, and don’t forget to showcase any relevant projects or achievements that demonstrate your skills in building production pipelines.
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re passionate about data engineering and how your background aligns with our needs. Be specific about your experience with data governance and quality, as these are key for us.
Showcase Your Attention to Detail: Since we value attention to detail, make sure your application is free from typos and errors. Include clear documentation of your past work, like process diagrams or data contracts, to demonstrate your ability to create effective documentation.
Apply Through Our Website: We encourage you to apply through our website for the best chance of getting noticed. It’s super easy, and you’ll be able to upload all your documents in one go. Plus, it helps us keep track of your application!
How to prepare for a job interview at Stackstudio Digital Ltd.
✨Know Your Tech Inside Out
Make sure you brush up on your Python and SQL skills, as well as your experience with Databricks. Be ready to discuss specific projects where you've built production pipelines and how you tackled challenges along the way.
✨Showcase Your Documentation Skills
Since attention to detail is key for this role, prepare examples of your documentation work. Bring along process diagrams or data contracts you've created, and be ready to explain how they contributed to project success.
✨Understand Data Governance
Familiarise yourself with data governance principles, especially around Unity Catalog. Be prepared to discuss how you've enforced data quality and security in past roles, and how you would approach these tasks in the new position.
✨Collaborate Like a Pro
This role involves working closely with various stakeholders. Think of examples where you've successfully collaborated with product owners, managers, or IT teams to deliver data solutions. Highlight your communication skills and how you translate technical requirements into actionable tasks.