At a Glance
- Tasks: Build and deploy robust data solutions for a major Business Data Service Project.
- Company: Join a forward-thinking team focused on data innovation.
- Benefits: Competitive day rate, flexible working, and opportunities for professional growth.
- Why this job: Make a real impact in data engineering while working with cutting-edge technologies.
- Qualifications: Experience with Python, PySpark, SQL, and cloud data platforms required.
- Other info: Onsite presence needed 2-3 days a month in Dudley.
The predicted salary is between £45,000 and £60,000 per year.
The sections below cover everything you need to know about this opportunity and what is expected of applicants.
Day rate: £475pd-£520pd (Inside IR35)
Contract: 6 months initial
We are currently recruiting for a Data Engineer to be part of a team on a Business Data Service Project, a Data Warehouse Replacement & Report Simplification project. You will be responsible for ensuring all data products and solutions created in the business insights ecosystem are fit for purpose, resilient, robust and reliable.
You will play a pivotal role, building, testing and deploying Data Warehouse solutions. This covers the full lifecycle: planning, ingestion, transformation, consolidation and aggregation of data from source to target in the Data Warehouse environment.
Skills and experience required:
- Strong experience developing ETL/ELT pipelines using PySpark and Python
- Hands-on experience with Microsoft Fabric lakehouse or similar cloud data platforms (Azure Synapse Analytics, Databricks)
- Proficiency in working with Jupyter/Fabric Notebooks for data engineering workflows
- Solid understanding of data lakehouse architecture patterns and medallion architecture
- Experience working with Delta Lake or similar lakehouse storage formats
- Strong SQL skills for data manipulation, transformation, and quality validation
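As a hedged illustration of the SQL quality-validation skill listed above, the sketch below runs a row-count and a not-null check. It uses Python's built-in sqlite3 as a stand-in for a warehouse engine, and the table and column names (`sales`, `amount`) are hypothetical:

```python
import sqlite3

# In-memory database standing in for a warehouse table (names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, amount REAL, region TEXT)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [(1, 120.0, "West Midlands"), (2, None, "London"), (3, 75.5, "West Midlands")],
)

# Basic quality checks: total row count and rows violating a not-null rule.
total_rows = conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]
null_amounts = conn.execute(
    "SELECT COUNT(*) FROM sales WHERE amount IS NULL"
).fetchone()[0]

print(f"rows={total_rows}, null_amounts={null_amounts}")  # rows=3, null_amounts=1
```

In a real warehouse the same checks would typically run against Delta Lake tables via PySpark or Fabric Notebooks, but the SQL itself is the transferable part.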
This role requires 2-3 days per month onsite in Dudley, West Midlands. Please consider this when applying.
If you are interested in the role and would like to apply, please click on the link for immediate consideration.
Data Engineer in Brierley Hill. Employer: TXP
Contact Detail:
TXP Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role in Brierley Hill
✨Tip Number 1
Network like a pro! Reach out to your connections in the data engineering field. Attend meetups or webinars, and don’t be shy about asking for introductions. You never know who might have the inside scoop on job openings.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving ETL/ELT pipelines with PySpark and Python. This will give potential employers a taste of what you can do and set you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on your SQL skills and understanding data lakehouse architecture patterns. Be ready to discuss your experience with Microsoft Fabric and how you've tackled challenges in past projects.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen. Plus, we love seeing candidates who are proactive about their job search!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Python, PySpark, and SQL. We want to see how your skills align with the Data Engineer role, so don’t be shy about showcasing relevant projects or achievements!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re passionate about data engineering and how your background makes you a perfect fit for our Business Data Service Project. Keep it engaging and personal!
Showcase Your Technical Skills: When filling out your application, be specific about your hands-on experience with tools like Microsoft Fabric lakehouse or Azure Synapse Analytics. We love seeing concrete examples of how you've used these technologies in your work.
Apply Through Our Website: Don’t forget to apply through our website for immediate consideration! It’s the best way for us to receive your application and get you into the process quickly. We can’t wait to hear from you!
How to prepare for a job interview at TXP
✨Know Your Tech Stack
Make sure you brush up on your Python, PySpark, and SQL skills before the interview. Be ready to discuss how you've used these technologies in past projects, especially in developing ETL/ELT pipelines. Having specific examples will show that you’re not just familiar with the tools, but that you can apply them effectively.
✨Understand Data Architecture
Familiarise yourself with data lakehouse architecture patterns and medallion architecture. Be prepared to explain how these concepts relate to the role and how they can impact the success of a Data Warehouse project. This shows that you’re not only technically skilled but also understand the bigger picture.
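If it helps to have a concrete mental model before the interview, the medallion pattern (bronze → silver → gold) can be sketched in plain Python; this is purely illustrative, with made-up records, and a real project would use PySpark and Delta Lake tables instead of lists:

```python
# Minimal sketch of the medallion pattern: raw (bronze) records are
# cleansed into a silver layer, then aggregated into a gold layer.

bronze = [  # raw ingested records, kept as-is
    {"id": 1, "amount": "120.0", "region": "west midlands"},
    {"id": 2, "amount": "bad", "region": "London"},
    {"id": 3, "amount": "75.5", "region": "west midlands"},
]

def to_silver(rows):
    """Cleanse and standardise: drop unparseable amounts, title-case regions."""
    out = []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine bad rows, not silently drop them
        out.append({"id": r["id"], "amount": amount, "region": r["region"].title()})
    return out

def to_gold(rows):
    """Aggregate for reporting: total amount per region."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'West Midlands': 195.5}
```

Being able to narrate each layer's responsibility like this (raw fidelity in bronze, cleansing in silver, business aggregates in gold) is exactly the "bigger picture" an interviewer is listening for.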
✨Showcase Your Problem-Solving Skills
During the interview, be ready to tackle some real-world scenarios or problems related to data engineering. Think about challenges you've faced in previous roles and how you overcame them. This will demonstrate your critical thinking and ability to adapt, which are crucial for this position.
✨Ask Insightful Questions
Prepare a few thoughtful questions about the Business Data Service Project and the team dynamics. This not only shows your interest in the role but also helps you gauge if the company culture aligns with your values. Plus, it gives you a chance to learn more about how you can contribute effectively.