Data Engineer - Azure Lakehouse & PySpark Pipelines

Temporary · £36,000–£60,000 / year (est.) · Home office (partial)

At a Glance

  • Tasks: Create data-driven insights and improve analytics using Azure Lakehouse and PySpark.
  • Company: Leading UK law firm committed to diversity and inclusion.
  • Benefits: Generous holidays, flexible work options, and a supportive work environment.
  • Why this job: Make a real impact by designing and maintaining data pipelines in a dynamic setting.
  • Qualifications: Experience with Python/PySpark, Azure Data Factory, and data integration techniques.
  • Other info: Collaborative team environment with opportunities for professional growth.

The predicted salary is between £36,000 and £60,000 per year.

A leading law firm in the UK is seeking a Data Engineer on a 12-month fixed-term contract. The role is crucial in creating data-driven insights and improving analytics across the organisation.

Candidates should have proven experience with:

  • Python/PySpark
  • Azure Data Factory
  • Data integration techniques

The ideal applicant will design, develop, and maintain data pipelines while collaborating across teams.
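For context, the day-to-day pipeline work described above typically looks something like the minimal PySpark sketch below: ingest raw files landed by an orchestrator such as Azure Data Factory, apply basic cleansing, and write a curated layer. All paths, container names, and column names here are hypothetical placeholders, not details taken from the role.

```python
# Illustrative sketch only: a minimal PySpark job of the kind this role involves.
# Storage paths, the "matters" dataset, and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("matters-pipeline").getOrCreate()

# Ingest raw CSV files landed in the lakehouse (e.g. by an Azure Data Factory copy activity).
raw = (
    spark.read
    .option("header", True)
    .csv("abfss://raw@examplelake.dfs.core.windows.net/matters/")
)

# Basic cleansing and enrichment.
curated = (
    raw.dropDuplicates(["matter_id"])
    .withColumn("opened_date", F.to_date("opened_date"))
    .filter(F.col("opened_date").isNotNull())
)

# Persist to the curated layer (Parquet here; Delta where the lakehouse supports it).
curated.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/matters/"
)
```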

The position offers generous holidays, flexible work options, and a commitment to diversity and inclusion.

Data Engineer - Azure Lakehouse & PySpark Pipelines employer: Irwin Mitchell

As a leading law firm in the UK, we pride ourselves on fostering a collaborative and inclusive work culture that empowers our employees to thrive. With generous holiday allowances, flexible working options, and a strong commitment to diversity, we provide an environment where Data Engineers can grow their skills and contribute to meaningful projects that drive data-driven insights across the organisation.

Contact Details:

Irwin Mitchell Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Data Engineer - Azure Lakehouse & PySpark Pipelines role

✨Tip Number 1

Network like a pro! Reach out to folks in the industry, especially those already working at the law firm or similar companies. A friendly chat can open doors and give you insider info that could make your application stand out.

✨Tip Number 2

Show off your skills! If you’ve got a portfolio of projects using Python, PySpark, or Azure Data Factory, make sure to highlight them. We love seeing real-world applications of your expertise, so don’t hold back!

✨Tip Number 3

Prepare for the interview by brushing up on your technical knowledge and soft skills. Be ready to discuss how you’ve tackled challenges in data integration and collaboration. We want to see how you think and work with others!

✨Tip Number 4

Apply through our website! It’s the best way to ensure your application gets seen. Plus, it shows you’re genuinely interested in the role and the company. Let’s get you that Data Engineer position!

We think you need these skills to ace the Data Engineer - Azure Lakehouse & PySpark Pipelines role

Python
PySpark
Azure Data Factory
Data Integration Techniques
Data Pipeline Development
Collaboration Skills
Data Analytics
Problem-Solving Skills

Some tips for your application 🫡

Tailor Your CV: Make sure your CV highlights your experience with Python/PySpark and Azure Data Factory. We want to see how your skills align with the role, so don’t be shy about showcasing relevant projects!

Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re passionate about data engineering and how you can contribute to our team. Keep it concise but impactful – we love a good story!

Showcase Collaboration Skills: Since this role involves working across teams, mention any past experiences where you collaborated successfully. We value teamwork, so let us know how you’ve made a difference in previous roles!

Apply Through Our Website: We encourage you to apply directly through our website for a smoother process. It helps us keep track of applications and ensures you don’t miss out on any important updates from us!

How to prepare for a job interview at Irwin Mitchell

✨Know Your Tech Inside Out

Make sure you brush up on your Python and PySpark skills before the interview. Be ready to discuss specific projects where you've used these technologies, as well as any challenges you faced and how you overcame them.

✨Showcase Your Data Pipeline Experience

Prepare to talk about your experience with Azure Data Factory and data integration techniques. Have examples ready that demonstrate how you've designed, developed, and maintained data pipelines in previous roles.

✨Collaboration is Key

Since this role involves working across teams, think of examples that highlight your teamwork skills. Be ready to discuss how you've collaborated with others to achieve common goals, especially in a data-driven context.

✨Embrace Diversity and Inclusion

Familiarise yourself with the firm's commitment to diversity and inclusion. Be prepared to share your thoughts on how diverse perspectives can enhance data analytics and decision-making processes.
