At a Glance
- Tasks: Join a team to create a scalable data platform and integrate various data systems.
- Company: An established retailer transitioning to Microsoft Fabric, based near Bristol.
- Benefits: Enjoy a salary up to £70,000, 24 days leave, and private healthcare.
- Other info: Hybrid role with 3 days in the office and 2 days remote work.
- Why this job: Be part of a collaborative culture focused on data-driven decisions and innovation.
- Qualifications: Experience in Python/PySpark, Azure Data Factory, and strong SQL skills required.
The predicted salary is between £42,000 and £84,000 per year.
I am looking for a Senior Data Engineer to join an established retailer as it migrates to Microsoft Fabric. The business is looking to integrate a number of its data systems into one fully scalable solution and create a brand-new data warehouse that will make data readily available for reporting.
The business has office locations around the UK; however, its technology hub is based just outside Bristol, where you would work collaboratively with the Engineering Manager and other engineers. You will also work with the wider IT function to enable the organisation to leverage its data and make data-driven decisions.
As part of this role, you will be responsible for:
- Helping create a scalable data platform and integrate data from various sources
- Creating robust data pipelines
- Writing robust code to transform and manipulate data
- Working with senior members of the technical team to advise on best practices for modern data platform architecture
- Contributing to the planning of the organisation's long-term data strategy, maintaining data integrity and a focus on GDPR at all times
To be successful in this role you will have:
- Coding experience with Python/PySpark
- Data pipeline development experience utilising Azure Data Factory or Fabric Pipelines
- Experience working within an Azure environment, with technologies such as Lakehouse architecture, Data Lake, Delta Lake and Azure Synapse
- Strong SQL knowledge
- Strong communication skills
This is a hybrid role based out of the organisation's Bristol office three days per week, with the remaining two days working from home.
Some of the benefits included in this role are:
- Salary up to £70,000 depending on experience
- 24 days annual leave plus bank holidays (rising to 27 days with service)
- Company pension scheme
- Healthcare benefits such as private healthcare
This is just a brief overview of the role. For the full information, simply apply to the role with your CV, and I will call you to discuss further. My client is looking to begin the interview process ASAP, so don't miss out, APPLY now!
Senior Data Engineer employer: Tenth Revolution Group
Contact Details:
Tenth Revolution Group Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data Engineer role
✨Tip Number 1
Familiarise yourself with Microsoft Fabric and Azure services. Understanding how these platforms work will not only help you in interviews but also demonstrate your commitment to the role and the company's technology stack.
✨Tip Number 2
Network with current employees or professionals in similar roles. Engaging with them on platforms like LinkedIn can provide insights into the company culture and expectations, which can be invaluable during your interview.
✨Tip Number 3
Prepare to discuss your experience with data pipeline development and coding in Python/PySpark. Be ready to share specific examples of projects you've worked on that align with the responsibilities outlined in the job description.
✨Tip Number 4
Brush up on your SQL skills and be prepared for technical questions. Since strong SQL knowledge is a requirement, demonstrating your proficiency through practical examples or problem-solving scenarios can set you apart from other candidates.
We think you need these skills to ace the Senior Data Engineer role
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights relevant experience in data engineering, particularly with Python, Azure Data Factory, and SQL. Emphasise any projects where you've created scalable data platforms or integrated data systems.
Craft a Compelling Cover Letter: Write a cover letter that showcases your passion for data engineering and your understanding of the company's goals. Mention your experience with data pipelines and how you can contribute to their long-term data strategy.
Highlight Technical Skills: In your application, clearly list your technical skills related to the job description, such as coding in Python/PySpark, working with Azure environments, and your strong SQL knowledge. This will help you stand out to the hiring team.
Showcase Communication Skills: Since strong communication skills are essential for this role, include examples in your application where you've successfully collaborated with teams or communicated complex technical concepts to non-technical stakeholders.
How to prepare for a job interview at Tenth Revolution Group
✨Showcase Your Technical Skills
Make sure to highlight your coding experience with Python and PySpark during the interview. Be prepared to discuss specific projects where you've developed data pipelines using Azure Data Factory or Fabric Pipelines, as this will demonstrate your hands-on expertise.
✨Understand the Company’s Data Strategy
Research the company’s current data systems and their migration plans to Microsoft Fabric. Being knowledgeable about their long-term data strategy will show your genuine interest in the role and help you align your answers with their goals.
✨Prepare for Scenario-Based Questions
Expect questions that assess your problem-solving skills in real-world scenarios. Think of examples where you've created scalable data platforms or maintained data integrity while focusing on GDPR compliance, as these are crucial aspects of the role.
✨Emphasise Communication Skills
Since collaboration is key in this role, be ready to discuss how you've effectively communicated with technical teams in the past. Highlight any experiences where you’ve advised on best practices or worked cross-functionally to leverage data for decision-making.