At a Glance
- Tasks: Design and build data pipelines using Databricks for impactful data products.
- Company: Join a forward-thinking company with a focus on data and analytics.
- Benefits: Competitive daily rate, hybrid work model, and opportunities for professional growth.
- Why this job: Be at the forefront of data engineering and make a real difference in business decisions.
- Qualifications: 6+ years in data engineering, strong Python and SQL skills required.
- Other info: Collaborative team environment with exciting projects and career advancement potential.
The predicted salary is between £54,000 and £84,000 per year.
Location: UK (Hybrid, 2-3 days per week in-office)
Rate: £446/day (Inside IR35)
Contract Duration: 6 months
Additional Requirements: May require occasional travel to Dublin office
About the Role
We are looking for an experienced Senior Data Engineer to join a Data & Analytics (DnA) team. You will design, build, and operate production-grade data products across customer, commercial, financial, sales, and broader data domains. This role is hands-on and heavily focused on Databricks-based engineering, data quality, governance, and DevOps-aligned delivery.
You will work closely with the Data Engineering Manager, Product Owner, Data Product Manager, Data Scientists, Head of Data & Analytics, and IT teams to transform business requirements into governed, decision-grade datasets embedded in business processes and trusted for reporting, analytics, and advanced use cases.
Key Responsibilities
- Design, build, and maintain pipelines in Databricks using Delta Lake and Delta Live Tables.
- Implement medallion architectures (Bronze/Silver/Gold) and deliver reusable, discoverable data products.
- Ensure pipelines meet non-functional requirements such as freshness, latency, completeness, scalability, and cost-efficiency.
- Own and operate Databricks assets including jobs, notebooks, SQL, and Unity Catalog objects.
- Apply Git-based DevOps practices, CI/CD, and Databricks Asset Bundles to safely promote changes across environments.
- Implement monitoring, alerting, runbooks, incident response, and root-cause analysis.
- Enforce governance and security using Unity Catalog (lineage, classification, ACLs, row/column-level security).
- Define and maintain data-quality rules, expectations, and SLOs within pipelines.
- Support root-cause analysis of data anomalies and production issues.
- Partner with Product Owner, Product Manager, and business stakeholders to translate requirements into functional and non-functional delivery scope.
- Collaborate with IT platform teams to define data contracts, SLAs, and schema evolution strategies.
- Produce clear technical documentation (data contracts, source-to-target mappings, release notes).
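Several of the responsibilities above centre on expectation-driven pipelines that promote raw Bronze data into cleansed Silver tables. As a rough illustration of that pattern in plain Python (no Databricks runtime; in Delta Live Tables the same idea is expressed declaratively with decorators such as `@dlt.expect_or_drop`, and all names and records below are invented for illustration):

```python
# Minimal sketch of Bronze -> Silver promotion with data-quality
# expectations, in plain Python. Illustrative only: real pipelines
# would use Delta Live Tables expectations on Spark DataFrames.

def expect_or_drop(rows, name, predicate):
    """Keep rows satisfying the expectation; report how many were dropped."""
    kept = [r for r in rows if predicate(r)]
    dropped = len(rows) - len(kept)
    print(f"expectation '{name}': kept {len(kept)}, dropped {dropped}")
    return kept

# Bronze: raw records as ingested, including bad ones.
bronze = [
    {"customer_id": 1, "amount": 120.0},
    {"customer_id": None, "amount": 35.0},   # missing key -> dropped
    {"customer_id": 3, "amount": -10.0},     # negative amount -> dropped
]

# Silver: cleansed, governed records ready for downstream Gold tables.
silver = expect_or_drop(bronze, "valid_customer_id",
                        lambda r: r["customer_id"] is not None)
silver = expect_or_drop(silver, "non_negative_amount",
                        lambda r: r["amount"] >= 0)
```

The kept/dropped counts are exactly the kind of data-quality metric the role would surface through monitoring and SLOs.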
Essential Skills & Experience:
- 6+ years in data engineering or advanced analytics engineering roles.
- Strong hands-on expertise in Python and SQL.
- Proven experience building production pipelines in Databricks.
- Excellent attention to detail, with the ability to create effective documentation and process diagrams.
- Solid understanding of data modelling, performance tuning, and cost optimisation.
Desirable Skills & Experience:
- Hands-on experience with Databricks Lakehouse, including Delta Lake and Delta Live Tables for batch/stream pipelines.
- Knowledge of pipeline health monitoring, SLA/SLO management, and incident response.
- Unity Catalog governance and security expertise, including lineage, table ACLs, and row/column-level security.
- Familiarity with Databricks DevOps/DataOps practices (Git-based development, CI/CD, automated testing).
- Performance and cost optimisation strategies for Databricks (autoscaling, Photon/serverless, partitioning, Z-Ordering, OPTIMIZE/VACUUM).
- Semantic layer and metrics engineering experience for consistent business metrics and self-service analytics.
- Experience with cloud-native analytics platforms (preferably Azure) operating as enterprise-grade production services.
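The SLA/SLO monitoring experience listed above amounts to comparing observed pipeline metrics against agreed targets and alerting on breaches. A minimal, hypothetical freshness check (the threshold, timestamps, and function names are all invented; in production the refresh time would come from Delta table history or job metadata):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical freshness SLO: the Gold table must have been refreshed
# within the last 2 hours.
FRESHNESS_SLO = timedelta(hours=2)

def freshness_status(last_refresh, now, slo=FRESHNESS_SLO):
    """Return ('ok' | 'breach', observed staleness) for a table."""
    staleness = now - last_refresh
    return ("ok" if staleness <= slo else "breach"), staleness

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
status, staleness = freshness_status(
    last_refresh=datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc), now=now)
# A table that is 3 hours stale breaches a 2-hour SLO and should alert.
```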
Senior Data Engineer employer: Stott & May Professional Search Limited
Contact Detail:
Stott & May Professional Search Limited Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data Engineer role
✨Network Like a Pro
Get out there and connect with folks in the industry! Attend meetups, webinars, or even just grab a coffee with someone who’s already in the data engineering game. You never know when a casual chat could lead to your next big opportunity.
✨Show Off Your Skills
Don’t just tell them what you can do; show them! Create a portfolio of your projects, especially those involving Databricks and data pipelines. Share your GitHub or any relevant work during interviews to demonstrate your hands-on experience.
✨Ace the Interview
Prepare for technical interviews by brushing up on your Python and SQL skills. Be ready to discuss your past projects and how you tackled challenges in data engineering. Practice common interview questions and scenarios to boost your confidence.
✨Apply Through Our Website
We’ve got some fantastic opportunities waiting for you! Make sure to apply through our website to get the best chance at landing that Senior Data Engineer role. It’s the quickest way to get your application in front of the right people!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Senior Data Engineer role. Highlight your experience with Databricks, Python, and SQL, and don’t forget to showcase any relevant projects that demonstrate your skills in building production pipelines.
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re the perfect fit for our Data & Analytics team. Mention specific experiences that align with the job description and show us your passion for data engineering.
Showcase Your Technical Skills: In your application, be sure to highlight your hands-on expertise with tools like Delta Lake and your understanding of DevOps practices. We want to see how you’ve applied these skills in real-world scenarios, so don’t hold back!
Apply Through Our Website: We encourage you to apply through our website for a smoother process. It helps us keep track of your application and ensures you don’t miss out on any important updates from us. Plus, we love seeing applications come directly from our site!
How to prepare for a job interview at Stott & May Professional Search Limited
✨Know Your Databricks Inside Out
Make sure you brush up on your Databricks skills, especially around Delta Lake and Delta Live Tables. Be ready to discuss how you've built production pipelines and any challenges you've faced. This is a hands-on role, so demonstrating your practical experience will really set you apart.
✨Showcase Your Problem-Solving Skills
Prepare to talk about specific instances where you've tackled data anomalies or production issues. Highlight your approach to root-cause analysis and how you implemented monitoring and alerting. Companies love candidates who can think critically and solve problems effectively.
✨Get Familiar with Governance and Security
Since this role involves enforcing governance and security using Unity Catalog, make sure you understand its features like lineage and ACLs. Be ready to explain how you've applied these in past projects, as it shows you're not just technically skilled but also aware of best practices.
✨Communicate Clearly and Document Well
Effective communication is key, especially when collaborating with various stakeholders. Prepare to discuss how you've produced clear technical documentation in the past. Being able to articulate complex ideas simply will demonstrate your ability to work well within a team.