Data Engineer: PySpark ETL & Lakehouse Pipelines (6m) in Dudley

Dudley | Full-Time | No home office possible

At a Glance

  • Tasks: Build and optimise ETL/ELT pipelines using PySpark for exciting projects.
  • Company: Join a leading tech consultancy making waves in the UK.
  • Benefits: Competitive day rate, flexible onsite work, and opportunities to enhance your skills.
  • Why this job: Be part of innovative data solutions that drive real business impact.
  • Qualifications: Experience with PySpark and SQL; manufacturing background is a plus.
  • Other info: Work in a dynamic environment with great potential for career advancement.

A leading tech consultancy in the UK is seeking a skilled Data Engineer to support a Material Spend Project. The role involves developing ETL/ELT pipelines with PySpark, working with cloud data platforms, and ensuring data quality through SQL.

Experience in manufacturing environments is an added advantage. The position requires onsite presence in Dudley for 2 to 3 days per month and offers a day rate of £400 to £440 outside IR35.
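The data-quality side of the role can be sketched briefly. This is a hypothetical illustration only, not the project's actual code: it uses Python's standard-library sqlite3 in place of PySpark or a cloud SQL engine, and the table and column names (material_spend, supplier, amount) are invented for the example.

```python
import sqlite3

# Hypothetical sketch of a SQL data-quality check of the kind the role
# describes. sqlite3 stands in for a cloud data platform's SQL engine;
# the schema and values are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE material_spend (supplier TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO material_spend VALUES (?, ?)",
    [("Acme", 120.0), ("Acme", 80.0), ("Bolt Co", None)],
)

# Quality rule: no NULL spend amounts should reach downstream reports.
null_rows = conn.execute(
    "SELECT COUNT(*) FROM material_spend WHERE amount IS NULL"
).fetchone()[0]
print(f"rows failing NULL check: {null_rows}")

# Aggregate spend per supplier, excluding rows that failed the check.
totals = dict(
    conn.execute(
        "SELECT supplier, SUM(amount) FROM material_spend "
        "WHERE amount IS NOT NULL GROUP BY supplier"
    )
)
print(totals)
```

In a lakehouse pipeline the same idea would typically run as a PySpark or warehouse-SQL validation step before aggregated tables are published.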

Employer: TXP Technology x People

Join a leading tech consultancy in the UK that values innovation and collaboration. You will work in a dynamic culture where your contributions directly impact projects like the Material Spend Project, and the combination of competitive day rates and flexible onsite requirements in Dudley offers strong opportunities for professional growth in a supportive environment that encourages skill enhancement and career progression.

Contact Detail:

TXP Technology x People Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Data Engineer: PySpark ETL & Lakehouse Pipelines (6m) role in Dudley

✨Tip Number 1

Network like a pro! Reach out to your connections in the tech consultancy space, especially those who work with data engineering. A friendly chat can lead to insider info about job openings or even a referral.

✨Tip Number 2

Show off your skills! Create a portfolio showcasing your PySpark ETL projects and any cloud data platforms you've worked with. This will give potential employers a clear view of what you can bring to the table.

✨Tip Number 3

Prepare for interviews by brushing up on SQL and data quality concepts. Be ready to discuss how you've tackled challenges in manufacturing environments, as this experience is a bonus for the role.

✨Tip Number 4

Don't forget to apply through our website! We make it easy for you to find roles that match your skills and interests. Plus, it shows you're serious about joining our team!

We think you need these skills to ace the Data Engineer: PySpark ETL & Lakehouse Pipelines (6m) role in Dudley

PySpark
ETL/ELT Pipelines
Cloud Data Platforms
SQL
Data Quality Assurance
Manufacturing Environment Experience
Onsite Collaboration
Data Engineering

Some tips for your application 🫡

Tailor Your CV: Make sure your CV highlights your experience with PySpark and ETL/ELT pipelines. We want to see how your skills align with the role, so don’t be shy about showcasing relevant projects or achievements!

Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re the perfect fit for this Data Engineer role. Mention your experience in manufacturing environments if you have it, and show us your passion for data engineering.

Showcase Your Technical Skills: We’re looking for someone who knows their stuff! Be sure to include specific examples of your work with cloud data platforms and SQL. The more we can see your technical prowess, the better!

Apply Through Our Website: Don’t forget to apply through our website! It’s the best way for us to receive your application and ensures you’re considered for the role. We can’t wait to see what you bring to the table!

How to prepare for a job interview at TXP Technology x People

✨Know Your PySpark Inside Out

Make sure you brush up on your PySpark skills before the interview. Be ready to discuss how you've developed ETL/ELT pipelines in the past, and have examples at hand that showcase your problem-solving abilities with data processing.

✨Familiarise Yourself with Cloud Data Platforms

Since the role involves working with cloud data platforms, do some research on the specific platforms the company uses. Being able to speak knowledgeably about your experience with these tools will definitely impress the interviewers.

✨Highlight Your Manufacturing Experience

If you have experience in manufacturing environments, make sure to highlight this during your interview. Discuss how your background can contribute to the Material Spend Project and provide concrete examples of relevant projects you've worked on.

✨Prepare for SQL Questions

Data quality is key in this role, so be prepared to answer questions about SQL. Brush up on your SQL skills and think of scenarios where you've ensured data integrity in your previous roles. This will show that you understand the importance of data quality.

