Data Engineer in Glasgow

Glasgow · Full-Time · £50,000 / year · No home office possible

At a Glance

  • Tasks: Engineer scalable ELT pipelines and build a modern data platform from scratch.
  • Company: Leading UK consumer brand undergoing major data transformation.
  • Benefits: Competitive salary, flexible working, and opportunities for professional growth.
  • Why this job: Join early in a transformative project and influence architecture decisions.
  • Qualifications: Experience in Data Engineering with Azure and Databricks, strong PySpark skills.
  • Other info: Collaborative environment with a focus on innovation and best practices.

Our incredibly successful client, a consumer brand, is undertaking a major data modernisation programme, moving away from legacy systems, manual Excel reporting and fragmented data sources to a fully automated Azure Enterprise Landing Zone and Databricks Lakehouse. They are building a modern data platform from the ground up using Lakeflow Declarative Pipelines, Unity Catalog and Azure Data Factory, and this role sits right at the heart of that transformation. This is a rare opportunity to join early, influence architecture, and help define the engineering standards, pipelines, curated layers and best practices that will support Operations, Finance, Sales, Logistics and Customer Care.

If you want to build a best-in-class Lakehouse from scratch, this is the one.

What You'll Be Doing
  • Lakehouse Engineering (Azure + Databricks)
    • Engineer scalable ELT pipelines using Lakeflow Declarative Pipelines, PySpark and Spark SQL across a full Medallion Architecture (Bronze → Silver → Gold); a hedged pipeline sketch follows this list.
    • Implement ingestion patterns for files, APIs, SaaS platforms (e.g. subscription billing), SQL sources, SharePoint and SFTP using ADF and metadata-driven frameworks.
    • Apply Lakeflow expectations for data quality, schema validation and operational reliability.
  • Curated Data Layers & Modelling
    • Build clean, conformed Silver/Gold models aligned to enterprise business domains (customers, subscriptions, deliveries, finance, credit, logistics, operations).
    • Deliver star schemas, harmonisation logic, SCDs and business marts to power high-performance Power BI datasets.
    • Apply governance, lineage and fine-grained permissions via Unity Catalog; a permissions sketch also follows this list.
  • Orchestration & Observability
    • Design and optimise orchestration using Lakeflow Workflows and Azure Data Factory.
    • Implement monitoring, alerting, SLAs/SLIs, runbooks and cost optimisation across the platform.
  • DevOps & Platform Engineering
    • Build CI/CD pipelines in Azure DevOps for notebooks, Lakeflow pipelines, SQL models and ADF artefacts.
    • Ensure secure, enterprise-grade platform operation from Dev through to Prod, using private endpoints, managed identities and Key Vault.
    • Contribute to platform standards, design patterns, code reviews and the future roadmap.
  • Collaboration & Delivery
    • Work closely with BI/Analytics teams to deliver curated datasets powering dashboards across the organisation.
    • Influence architecture decisions and uplift engineering maturity within a growing data function.
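
For illustration, here is a minimal sketch of the kind of Bronze-to-Silver declarative pipeline with data-quality expectations this role would own, written against the Python dlt module that backs Lakeflow Declarative Pipelines. The landing path, table names and constraints are hypothetical, not the client's actual estate:

    import dlt
    from pyspark.sql import functions as F

    # "spark" is provided by the pipeline runtime inside Databricks.

    @dlt.table(comment="Bronze: raw subscription events landed via Auto Loader")
    def bronze_subscriptions():
        # Auto Loader incrementally discovers new files in the landing zone
        # (the abfss path below is a placeholder).
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("abfss://landing@examplestorage.dfs.core.windows.net/subscriptions/")
        )

    @dlt.table(comment="Silver: validated, conformed subscription records")
    @dlt.expect_or_drop("valid_customer", "customer_id IS NOT NULL")
    @dlt.expect_or_drop("non_negative_amount", "amount >= 0")
    def silver_subscriptions():
        # Expectations drop failing rows and surface the drop counts in the
        # pipeline event log, which feeds monitoring and alerting.
        return dlt.read_stream("bronze_subscriptions").withColumn(
            "ingested_at", F.current_timestamp()
        )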
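
In the same spirit, a hedged sketch of fine-grained Unity Catalog permissions applied from a notebook; the catalog, schema, table, group and function names are invented for illustration:

    # Unity Catalog grants (privilege names per the UC model).
    spark.sql("GRANT USE SCHEMA ON SCHEMA lakehouse.gold TO `finance-analysts`")
    spark.sql("GRANT SELECT ON TABLE lakehouse.gold.fct_subscriptions TO `finance-analysts`")

    # Row-level security: attach a row-filter function so a group only sees
    # its own region (function and column names are hypothetical).
    spark.sql("""
        CREATE OR REPLACE FUNCTION lakehouse.gold.region_filter(region STRING)
        RETURN is_account_group_member('uk-ops') OR region = 'UK'
    """)
    spark.sql("""
        ALTER TABLE lakehouse.gold.fct_subscriptions
        SET ROW FILTER lakehouse.gold.region_filter ON (region)
    """)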

Tech Stack You'll Work With
  • Databricks: Lakeflow Declarative Pipelines, Workflows, Unity Catalog, SQL Warehouses
  • Azure: ADLS Gen2, Data Factory, Key Vault, vNets & Private Endpoints
  • Languages & Tools: PySpark, Spark SQL, Python, Git
  • DevOps: Azure DevOps Repos, Pipelines, CI/CD
  • Analytics: Power BI, Fabric

What We're Looking For
  • Experience
    • Significant commercial Data Engineering experience delivering production workloads on Azure + Databricks.
    • Strong PySpark/Spark SQL and distributed data processing expertise.
    • Proven Medallion/Lakehouse delivery experience using Delta Lake.
    • Solid dimensional modelling (Kimball), including surrogate keys, SCD types 1/2 and merge strategies; a hedged SCD Type 2 merge sketch follows this section.
    • Operational experience: SLAs, observability, idempotent pipelines, reprocessing, backfills.
  • Mindset
    • Strong grounding in secure Azure Landing Zone patterns.
    • Comfort with Git, CI/CD, automated deployments and modern engineering standards.
    • Clear communicator who can translate technical decisions into business outcomes.
  • Nice to Have
    • Databricks Certified Data Engineer Associate
    • Streaming ingestion experience (Auto Loader, Structured Streaming, watermarking)
    • Subscription/entitlement modelling experience
    • Advanced Unity Catalog security (RLS, ABAC, PII governance)
    • Terraform/Bicep for IaC
    • Fabric Semantic Model / Direct Lake optimisation
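
As context for the dimensional-modelling requirement above, here is a minimal SCD Type 2 sketch using the Delta Lake MERGE API. The table and column names (lakehouse.silver.customers, dim_customer, address, segment) are invented for illustration, and a production version would also manage surrogate keys and late-arriving data:

    from delta.tables import DeltaTable
    from pyspark.sql import functions as F

    # Hypothetical conformed source and Gold dimension table.
    updates = (
        spark.table("lakehouse.silver.customers")
        .withColumn("effective_from", F.current_timestamp())
    )
    dim = DeltaTable.forName(spark, "lakehouse.gold.dim_customer")

    # Step 1: close out current rows whose tracked attributes have changed.
    (dim.alias("d")
     .merge(updates.alias("u"),
            "d.customer_id = u.customer_id AND d.is_current = true")
     .whenMatchedUpdate(
         condition="d.address <> u.address OR d.segment <> u.segment",
         set={"is_current": "false", "effective_to": "u.effective_from"})
     .execute())

    # Step 2: append new versions for changed or brand-new customers; the
    # anti-join against still-current rows keeps reruns idempotent.
    current = spark.table("lakehouse.gold.dim_customer").where("is_current = true")
    (updates.join(current.select("customer_id"), "customer_id", "left_anti")
     .withColumn("is_current", F.lit(True))
     .withColumn("effective_to", F.lit(None).cast("timestamp"))
     .write.format("delta").mode("append")
     .saveAsTable("lakehouse.gold.dim_customer"))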

Data Engineer in Glasgow employer: Head Resourcing

Join a leading UK consumer business in Glasgow as a Data Engineer, where you'll be at the forefront of a transformative data modernisation programme. With a strong focus on employee growth and collaboration, this role offers a unique opportunity to influence architecture and engineering standards while working with cutting-edge technologies like Azure and Databricks. Enjoy a supportive work culture that values innovation and provides ample opportunities for professional development in a dynamic environment.

Contact Detail:

Head Resourcing Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Data Engineer in Glasgow role

✨ Tip Number 1

Network like a pro! Get out there and connect with folks in the industry. Attend meetups, webinars, or even just grab a coffee with someone who’s already in the data engineering game. You never know who might have the inside scoop on job openings!

✨ Tip Number 2

Show off your skills! Create a portfolio showcasing your projects, especially those involving Azure and Databricks. This is your chance to demonstrate your expertise in building scalable ELT pipelines and working with Medallion Architecture. A strong portfolio can really set you apart!

✨ Tip Number 3

Prepare for interviews by brushing up on common data engineering scenarios. Be ready to discuss your experience with PySpark and Spark SQL, and how you've tackled challenges in previous roles. Practising your responses will help you feel more confident when it’s time to shine!

✨ Tip Number 4

Don’t forget to apply through our website! We’ve got some fantastic opportunities waiting for you, and applying directly can sometimes give you an edge. Plus, it’s super easy to keep track of your applications that way!

We think you need these skills to ace the Data Engineer in Glasgow role

Azure Data Factory
Databricks Lakehouse
Lakeflow Declarative Pipelines
PySpark
Spark SQL
Medallion Architecture
Data Modelling
Star Schemas
Governance and Lineage
CI/CD Pipelines
Azure DevOps
Dimensional Modelling
Observability
Secure Azure Landing Zone Patterns
Clear Communication

Some tips for your application 🫡

Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with Azure, Databricks, and any relevant projects you've worked on. We want to see how your skills align with our needs!

Showcase Your Projects: Include specific examples of your work with ELT pipelines, data modelling, and orchestration. We love seeing real-world applications of your skills, so don’t hold back on the details!

Be Clear and Concise: When writing your application, keep it clear and to the point. Use straightforward language to explain your experience and how it relates to the role. We appreciate clarity just as much as you do!

Apply Through Our Website: Don’t forget to apply through our website! It’s the best way for us to receive your application and ensures you’re considered for this exciting opportunity. We can’t wait to hear from you!

How to prepare for a job interview at Head Resourcing

✨ Know Your Tech Stack

Make sure you’re well-versed in the specific technologies mentioned in the job description, like Azure, Databricks, and PySpark. Brush up on your knowledge of Lakeflow Declarative Pipelines and Medallion Architecture, as these will likely come up during the interview.

✨ Showcase Your Experience

Prepare to discuss your previous projects that align with the role. Highlight your experience with building scalable ELT pipelines and any operational experience you have with SLAs and observability. Use specific examples to demonstrate how you've tackled challenges in past roles.

✨ Communicate Clearly

Practice explaining complex technical concepts in simple terms. The interviewer will want to see if you can translate your technical decisions into business outcomes, so be ready to articulate how your work impacts the broader organisation.

✨ Ask Insightful Questions

Prepare thoughtful questions about the company’s data modernisation programme and how you can contribute to it. Inquire about their current challenges with legacy systems and what success looks like for this role. This shows your genuine interest and helps you assess if the company is the right fit for you.
