At a Glance
- Tasks: Lead the transformation to a modern data platform using Azure and Databricks.
- Company: Join a rapidly scaling UK consumer brand with a focus on innovation.
- Benefits: Competitive salary, flexible working, and opportunities for professional growth.
- Why this job: Be at the forefront of data engineering and influence architecture from the ground up.
- Qualifications: 5-8 years in Data Engineering with strong Azure and Databricks experience.
- Other info: Dynamic team environment with excellent career advancement potential.
The predicted salary is between £48,000 and £72,000 per year.
A rapidly scaling UK consumer brand is undertaking a major data modernisation programme, moving from legacy systems, manual Excel reporting and fragmented data sources to a fully automated Azure Enterprise Landing Zone and Databricks Lakehouse. They are building a modern data platform from the ground up using Lakeflow Declarative Pipelines, Unity Catalog and Azure Data Factory, and this role sits right at the heart of that transformation. This is a rare opportunity to join early, influence the architecture, and help define the engineering standards, pipelines, curated layers and best practices that will support Operations, Finance, Sales, Logistics and Customer Care.
What You'll Be Doing
- Engineer scalable ELT pipelines using Lakeflow Declarative Pipelines, PySpark, and Spark SQL across a full Medallion Architecture (Bronze → Silver → Gold); a minimal pipeline sketch follows this list.
- Implement ingestion patterns for files, APIs, SaaS platforms (e.g. subscription billing), SQL sources, SharePoint and SFTP using ADF and a metadata-driven framework.
- Apply Lakeflow expectations for data quality, schema validation and operational reliability.
- Build clean, conformed Silver/Gold models aligned to enterprise business domains (customers, subscriptions, deliveries, finance, credit, logistics, operations).
- Deliver star schemas, harmonisation logic, SCDs and business marts to power high-performance Power BI datasets.
- Apply governance, lineage and fine-grained security via Unity Catalog (a governance sketch follows the tech-stack list below).
- Design and optimise orchestration using Lakeflow Workflows and Azure Data Factory.
- Implement monitoring, alerting, SLAs/SLIs, runbooks and cost-optimisation across the platform.
- Build CI/CD pipelines in Azure DevOps for notebooks, Lakeflow pipelines, SQL models and ADF artefacts.
- Ensure secure, enterprise-grade platform operation across Dev and Prod, using private endpoints, managed identities and Key Vault.
- Contribute to platform standards, design patterns, code reviews and future roadmap.
- Work closely with BI/Analytics teams to deliver curated datasets powering dashboards across the organisation.
- Influence architecture decisions and uplift engineering maturity within a growing data function.
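For flavour, here is a minimal sketch of the kind of declarative pipeline this role would build, using the Python dlt API of Lakeflow Declarative Pipelines (formerly Delta Live Tables). The table names, landing path and expectation rules are hypothetical, and the code assumes it runs in a pipeline notebook where spark is predefined.

    # Minimal Bronze -> Silver sketch. All names and paths are illustrative assumptions.
    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Bronze: raw customer files ingested incrementally via Auto Loader")
    def bronze_customers():
        return (
            spark.readStream.format("cloudFiles")        # Auto Loader file ingestion
            .option("cloudFiles.format", "json")
            .load("/Volumes/raw/customers/")             # hypothetical landing path
        )

    @dlt.table(comment="Silver: cleaned, conformed customer records")
    @dlt.expect_or_drop("valid_id", "customer_id IS NOT NULL")  # drop rows failing the rule
    @dlt.expect("has_email", "email IS NOT NULL")               # warn-only expectation
    def silver_customers():
        return (
            dlt.read_stream("bronze_customers")
            .withColumn("email", F.lower(F.col("email")))
        )

Expectations like these surface data-quality metrics in the pipeline UI, which is one way the data-quality and operational-reliability responsibilities above are typically evidenced.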
Tech Stack You'll Work With
- Databricks: Lakeflow Declarative Pipelines, Workflows, Unity Catalog, SQL Warehouses
- Azure: ADLS Gen2, Data Factory, Key Vault, vNets & Private Endpoints
- Languages: PySpark, Spark SQL, Python, Git
- DevOps: Azure DevOps Repos, Pipelines, CI/CD
- Analytics: Power BI, Fabric
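As a hedged illustration of the Unity Catalog governance work mentioned above, the snippet below grants schema access and attaches a row filter for fine-grained security. The catalog, schema, table, group and function names are all assumptions, and it requires a Unity Catalog-enabled workspace.

    # Unity Catalog governance sketch: a coarse grant plus a row filter.
    # gold.finance, fact_sales, finance_analysts etc. are hypothetical names.
    spark.sql("GRANT SELECT ON SCHEMA gold.finance TO `finance_analysts`")

    # Row-level security: admins see everything, everyone else only UK rows.
    spark.sql("""
        CREATE OR REPLACE FUNCTION gold.finance.uk_only(region STRING)
        RETURNS BOOLEAN
        RETURN IF(is_account_group_member('data_admins'), TRUE, region = 'UK')
    """)
    spark.sql("ALTER TABLE gold.finance.fact_sales "
              "SET ROW FILTER gold.finance.uk_only ON (region)")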
What We're Looking For
- 5-8+ years of Data Engineering with 2-3+ years delivering production workloads on Azure + Databricks.
- Strong PySpark/Spark SQL and distributed data processing expertise.
- Proven Medallion/Lakehouse delivery experience using Delta Lake.
- Solid dimensional modelling (Kimball) including surrogate keys, SCD types 1/2, and merge strategies (see the SCD sketch after this list).
- Operational experience: SLAs, observability, idempotent pipelines, reprocessing, backfills.
- Strong grounding in secure Azure Landing Zone patterns.
- Comfort with Git, CI/CD, automated deployments and modern engineering standards.
- Clear communicator who can translate technical decisions into business outcomes.
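As a flavour of the SCD Type 2 and merge-strategy experience called for above, here is a minimal sketch using the Delta Lake Python MERGE API. The table and column names are assumptions, the change feed is assumed to contain only changed records, and surrogate-key generation (e.g. an identity column) is omitted for brevity.

    # SCD Type 2 sketch: expire current dimension rows, then append new versions.
    # gold.dim_customer and silver.customer_changes are hypothetical tables.
    from delta.tables import DeltaTable
    from pyspark.sql import functions as F

    dim = DeltaTable.forName(spark, "gold.dim_customer")
    changes = spark.table("silver.customer_changes")   # assumed: changed records only

    # Step 1: close out the currently active row for each changed customer.
    (dim.alias("t")
        .merge(changes.alias("s"),
               "t.customer_id = s.customer_id AND t.is_current = true")
        .whenMatchedUpdate(set={"is_current": "false",
                                "end_date": "current_date()"})
        .execute())

    # Step 2: append the new current versions.
    (changes
        .withColumn("is_current", F.lit(True))
        .withColumn("start_date", F.current_date())
        .withColumn("end_date", F.lit(None).cast("date"))
        .write.format("delta").mode("append").saveAsTable("gold.dim_customer"))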
Nice to Have
- Databricks Certified Data Engineer Associate
- Streaming ingestion experience (Auto Loader, structured streaming, watermarking); a streaming sketch follows this list.
- Subscription/entitlement modelling experience
- Advanced Unity Catalog security (RLS, ABAC, PII governance)
- Terraform/Bicep for IaC
- Fabric Semantic Model / Direct Lake optimisation
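For the streaming ingestion item above, here is a minimal Auto Loader sketch with watermarking; the source path, schema location, checkpoint and target table are hypothetical placeholders.

    # Auto Loader + structured streaming with a watermark for late data.
    from pyspark.sql import functions as F

    events = (
        spark.readStream.format("cloudFiles")                        # Auto Loader
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", "/Volumes/chk/events_schema/")
        .load("/Volumes/raw/events/")
    )

    # Events arriving more than 10 minutes late are dropped from the aggregate.
    counts = (
        events
        .withColumn("event_time", F.col("event_time").cast("timestamp"))
        .withWatermark("event_time", "10 minutes")
        .groupBy(F.window("event_time", "5 minutes"), "event_type")
        .count()
    )

    (counts.writeStream
        .option("checkpointLocation", "/Volumes/chk/events/")
        .outputMode("append")            # windows emit once the watermark passes
        .toTable("silver.event_counts"))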
Senior Data Engineer / Power BI in Glasgow
Employer: Head Resourcing Ltd
Contact Detail: Head Resourcing Ltd Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Senior Data Engineer / Power BI in Glasgow
✨ Tip Number 1
Network like a pro! Reach out to people in your industry on LinkedIn or at local meetups. A friendly chat can lead to opportunities that aren't even advertised yet.
✨ Tip Number 2
Show off your skills! Create a portfolio or GitHub repository showcasing your projects, especially those related to Azure and Databricks. This gives potential employers a taste of what you can do.
✨ Tip Number 3
Prepare for interviews by practising common questions and scenarios specific to data engineering. Think about how you'd tackle challenges in building scalable ELT pipelines or optimising orchestration.
✨ Tip Number 4
Don't forget to apply through our website! Applying directly is the best way to make sure your application reaches the recruiting team and you're considered for the role.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the role of Senior Data Engineer. Highlight your experience with Azure, Databricks, and any relevant projects that showcase your skills in building scalable ELT pipelines and data modelling.
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data engineering and how your background aligns with the company's data modernisation mission. Don't forget to mention specific technologies you've worked with that relate to the job description.
Showcase Your Projects: If you've worked on any notable projects, especially those involving Lakehouse architecture or Power BI, make sure to include them. We love seeing real-world applications of your skills, so don't hold back!
Apply Through Our Website: We encourage you to apply directly through our website. It's the best way for us to receive your application and ensures you're considered for this exciting opportunity.
How to prepare for a job interview at Head Resourcing Ltd
✨ Know Your Tech Stack Inside Out
Make sure you're well-versed in Azure, Databricks, and the specific tools mentioned in the job description. Brush up on your PySpark and Spark SQL skills, and be ready to discuss how you've used them in past projects. This will show that you can hit the ground running.
✨ Prepare for Scenario-Based Questions
Expect questions that ask you to solve real-world problems related to data engineering. Think about how you would design scalable ELT pipelines or implement monitoring and alerting. Practising these scenarios will help you articulate your thought process clearly during the interview.
✨ Showcase Your Collaboration Skills
This role involves working closely with BI/Analytics teams, so be prepared to discuss how you've collaborated in the past. Share examples of how you've influenced architecture decisions or improved engineering standards in previous roles. This will highlight your ability to work within a team.
✨ Ask Insightful Questions
At the end of the interview, don't forget to ask questions! Inquire about the company's data modernisation goals or how they measure success in their data engineering team. This shows your genuine interest in the role and helps you assess if it's the right fit for you.