Data Engineer (Databricks) in London

London · Full-Time · £36,000–£60,000 per year (est.) · No home office possible

At a Glance

  • Tasks: Design and optimise data pipelines using Databricks for high-performance data solutions.
  • Company: Join a forward-thinking tech company focused on data innovation.
  • Benefits: Competitive salary, flexible working options, and opportunities for professional growth.
  • Why this job: Be at the forefront of data engineering and make a real impact in cloud environments.
  • Qualifications: Experience with Databricks, PySpark, and Azure cloud services required.
  • Other info: Fast-paced environment with excellent career advancement opportunities.

The predicted salary is between £36,000 and £60,000 per year.

We are seeking a Databricks Data Engineer with strong expertise in designing and optimising large-scale data engineering solutions within the Databricks Data Intelligence Platform. This role is ideal for someone passionate about building high-performance data pipelines and ensuring robust data governance across modern cloud environments.

Key Responsibilities

  • Design, build, and maintain scalable data pipelines using Databricks Notebooks, Jobs, and Workflows for both batch and streaming data.
  • Optimise Spark and Delta Lake performance through efficient cluster configuration, adaptive query execution, and caching strategies.
  • Conduct performance testing and cluster tuning to ensure cost-efficient, high-performing workloads.
  • Implement data quality, lineage tracking, and access control policies aligned with Databricks Unity Catalog and governance best practices.
  • Develop PySpark applications for ETL, data transformation, and analytics, following modular and reusable design principles.
  • Create and manage Delta Lake tables with ACID compliance, schema evolution, and time travel for versioned data management.
  • Integrate Databricks solutions with Azure services such as Azure Data Lake Storage, Key Vault, and Azure Functions.
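As a rough sketch, a batch pipeline along these lines might combine a modular PySpark transformation with Delta Lake features such as schema evolution and time travel. All names below (storage paths, table and column names, the version number) are purely illustrative, and the `spark` session is assumed to be provided by the Databricks runtime:

```python
# Illustrative Databricks notebook cell: batch ingest into a Delta table.
# The `spark` session comes from the Databricks runtime; every path,
# table, and column name here is a hypothetical placeholder.
from pyspark.sql import functions as F

raw = (
    spark.read
    .format("json")
    .load("abfss://landing@examplestore.dfs.core.windows.net/orders/")
)

# Modular, reusable transformation step (testable in isolation)
def clean_orders(df):
    return (
        df.dropDuplicates(["order_id"])
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .filter(F.col("amount") > 0)
    )

(
    clean_orders(raw)
    .write
    .format("delta")
    .mode("append")
    .option("mergeSchema", "true")   # allow additive schema evolution
    .saveAsTable("sales.orders")
)

# Delta time travel: read the table as of an earlier version
previous = (
    spark.read.format("delta")
    .option("versionAsOf", 3)
    .table("sales.orders")
)
```

Factoring the transformation into a named function, as above, is one way to follow the modular, reusable design principles the role calls for.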

What We’re Looking For

  • Proven experience with Databricks, PySpark, and Delta Lake.
  • Strong understanding of workflow orchestration, performance optimisation, and data governance.
  • Hands-on experience with Azure cloud services.
  • Ability to work in a fast-paced environment and deliver high-quality solutions.
  • Current SC clearance.

If you are interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now. If this job isn’t quite right for you, but you are looking for a new position, please contact us for a confidential discussion about your career.

Data Engineer (Databricks) in London employer: hays-gcj-v4-pd-online

Join a forward-thinking company that values innovation and collaboration, offering a dynamic work culture where your contributions as a Data Engineer will directly impact our data-driven initiatives. With a strong focus on employee growth, we provide ample opportunities for professional development and training in cutting-edge technologies like Databricks and Azure. Located in a vibrant area, our workplace fosters creativity and teamwork, making it an excellent environment for those seeking meaningful and rewarding employment.

Contact Detail:

hays-gcj-v4-pd-online Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Data Engineer (Databricks) role in London

✨Tip Number 1

Network like a pro! Reach out to folks in the industry, attend meetups, and connect with other data engineers. You never know who might have the inside scoop on job openings or can refer you directly.

✨Tip Number 2

Show off your skills! Create a portfolio showcasing your Databricks projects, PySpark applications, and any cool data pipelines you've built. This will give potential employers a taste of what you can do.

✨Tip Number 3

Prepare for interviews by brushing up on common data engineering questions and scenarios. Practice explaining your thought process when designing scalable data solutions, as this will demonstrate your expertise.

✨Tip Number 4

Don't forget to apply through our website! It’s the best way to ensure your application gets seen. Plus, we love hearing from passionate candidates who are eager to join our team.

We think you need these skills to ace the Data Engineer (Databricks) role in London

Databricks
PySpark
Delta Lake
Data Pipeline Design
Workflow Orchestration
Performance Optimisation
Data Governance
Azure Cloud Services
ETL Development
Data Transformation
Cluster Configuration
Adaptive Query Execution
Caching Strategies
Data Quality Implementation
Access Control Policies
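Several of these skills (cluster configuration, adaptive query execution, caching strategies) are typically exercised through Spark session settings rather than standalone code. A hypothetical snippet, again assuming the Databricks-provided `spark` session and an illustrative table name, might look like:

```python
# Hypothetical tuning knobs for a Databricks workload.
# Values and the table name are illustrative only; real settings
# depend on the workload, data skew, and cluster size.
spark.conf.set("spark.sql.adaptive.enabled", "true")           # adaptive query execution
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")  # mitigate skewed joins

df = spark.table("sales.orders")
df.cache()   # caching strategy: keep a frequently-read table in memory
df.count()   # trigger an action so the cache is materialised
```

Being able to explain why each setting helps (and what it costs) tends to matter more in interviews than reciting the configuration keys themselves.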

Some tips for your application 🫡

Tailor Your CV: Make sure your CV highlights your experience with Databricks, PySpark, and Delta Lake. We want to see how your skills match the role, so don’t be shy about showcasing relevant projects or achievements!

Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re passionate about data engineering and how you can contribute to our team. Keep it concise but engaging – we love a good story!

Showcase Your Technical Skills: When applying, be specific about your technical expertise. Mention any experience with Azure services and performance optimisation techniques. We’re looking for someone who can hit the ground running, so let us know what you bring to the table!

Apply Through Our Website: We encourage you to apply directly through our website. It’s the easiest way for us to receive your application and ensures you’re considered for the role. Plus, it shows you’re keen on joining our team!

How to prepare for a job interview at hays-gcj-v4-pd-online

✨Know Your Databricks Inside Out

Make sure you brush up on your Databricks knowledge before the interview. Be ready to discuss how you've designed and optimised data pipelines using Databricks Notebooks, Jobs, and Workflows. Having specific examples of your past projects will show your expertise and passion for the platform.

✨Show Off Your PySpark Skills

Prepare to talk about your experience with PySpark applications. Think of a couple of scenarios where you've implemented ETL processes or data transformations. Highlight any modular design principles you've followed, as this will demonstrate your ability to create reusable solutions.

✨Performance Optimisation is Key

Be ready to discuss performance optimisation techniques you've used in the past. This could include cluster configuration, adaptive query execution, or caching strategies. Sharing specific instances where you've conducted performance testing or cluster tuning will impress your interviewers.

✨Understand Data Governance Best Practices

Familiarise yourself with data governance concepts, especially those related to Databricks Unity Catalog. Be prepared to explain how you've implemented data quality, lineage tracking, and access control policies in your previous roles. This shows that you not only know how to build data pipelines but also how to manage them responsibly.
