At a Glance
- Tasks: Join a dynamic team to build scalable data pipelines and enhance modern data architecture.
- Company: Be part of a leading (re)insurance business transforming its data capabilities in the Lloyd's market.
- Benefits: Enjoy a competitive salary, bonus, and benefits with hybrid working options.
- Why this job: Contribute to impactful data solutions while collaborating with diverse teams in a fast-paced environment.
- Qualifications: Experience with Databricks, PySpark, and Azure; a background in financial services is preferred.
- Other info: This is a permanent role with opportunities for growth in a cutting-edge data transformation journey.
The predicted salary is between £51,000 and £85,000 per year.
A Data Engineer is required for a fast-evolving (re)insurance business at the heart of the Lloyd's market, currently undergoing a major data transformation. With a strong foundation in the industry and a clear vision for the future, they are investing heavily in their data capabilities to build a best-in-class platform that supports smarter, faster decision-making across the business.
As part of this journey, they are looking for a Data Engineer to join their growing team. This is a hands-on role focused on building scalable data pipelines and enhancing a modern Lakehouse architecture using Databricks, PySpark, and Azure. The environment is currently hybrid cloud and on-prem, with a strategic move towards Microsoft Fabric - so experience across both is highly valued.
- Building and maintaining robust data pipelines using Databricks, PySpark, and Azure Data Factory.
- Working across both cloud and on-prem environments, supporting the transition to Microsoft Fabric.
- Collaborating with stakeholders across Underwriting, Actuarial, and Finance to deliver high-impact data solutions.
- Supporting DevOps practices and CI/CD workflows in Azure.
- Experience in financial services or the Lloyd's market is a plus.
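For candidates wondering what "CI/CD workflows in Azure" might look like day to day, a common pattern is an Azure Pipelines definition that tests on every push and deploys Databricks assets from the main branch. This is only an illustrative sketch; the pool image, file paths, and deploy target below are assumptions, not details from the posting.

```yaml
# Hypothetical Azure Pipelines sketch for a Databricks data-engineering repo:
# run tests on every push, then deploy jobs/notebooks when building main.
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest   # assumed hosted agent image

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.11'
  - script: |
      pip install -r requirements.txt
      pytest tests/
    displayName: Run unit tests
  - script: databricks bundle deploy --target prod   # target name is illustrative
    displayName: Deploy Databricks assets
    condition: and(succeeded(), eq(variables['Build.SourceBranchName'], 'main'))
```

In practice the deploy step would also need authentication (for example a service connection or workspace token), which is omitted here for brevity.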
Data Engineer SQL - Hybrid Working employer: Pioneer Search
Contact Details:
Pioneer Search Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer SQL - Hybrid Working role
✨Tip Number 1
Familiarise yourself with the specific technologies mentioned in the job description, such as Databricks, PySpark, and Azure. Consider building a small project or contributing to open-source projects that utilise these tools to demonstrate your hands-on experience.
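If you want a starting point for such a project, the core shape of a batch pipeline is read, clean, aggregate. The sketch below is a hypothetical, library-free Python version of that shape (the column names and data are invented); in a Databricks project you would express the same steps with PySpark DataFrames, e.g. `dropna` and `groupBy().sum()`.

```python
import csv
import io
from collections import defaultdict

# Invented sample data standing in for a raw extract.
RAW = """policy_id,line_of_business,gross_premium
P001,Property,1200.50
P002,Marine,
P003,Property,800.00
"""

def run_pipeline(raw_csv: str) -> dict:
    """Read CSV text, drop rows with a missing premium, total premium per line of business."""
    rows = csv.DictReader(io.StringIO(raw_csv))
    # Clean: keep only rows with a premium (PySpark equivalent: df.dropna(subset=[...])).
    clean = [r for r in rows if r["gross_premium"]]
    # Aggregate: sum premium per line (PySpark equivalent: df.groupBy(...).sum(...)).
    totals = defaultdict(float)
    for r in clean:
        totals[r["line_of_business"]] += float(r["gross_premium"])
    return dict(totals)

print(run_pipeline(RAW))  # {'Property': 2000.5}
```

Rebuilding exactly this kind of toy pipeline as a Databricks notebook with PySpark is a compact way to demonstrate the hands-on experience the role asks for.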
✨Tip Number 2
Network with professionals in the Lloyd's market or financial services sector. Attend industry meetups or webinars to connect with potential colleagues and learn more about the data transformation initiatives happening in the field.
✨Tip Number 3
Showcase your understanding of hybrid cloud environments and Microsoft Fabric by discussing relevant case studies or experiences during interviews. This will highlight your ability to adapt to the company's evolving data architecture.
✨Tip Number 4
Prepare to discuss your experience with CI/CD workflows and DevOps practices in Azure. Being able to articulate how you've implemented these processes in past roles will set you apart as a candidate who can contribute to their data engineering efforts effectively.
We think you need these skills to ace the Data Engineer SQL - Hybrid Working role
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights relevant experience with Databricks, PySpark, and Azure. Use specific examples of projects where you've built data pipelines or worked in hybrid cloud environments to demonstrate your skills.
Craft a Compelling Cover Letter: In your cover letter, express your enthusiasm for the role and the company. Mention your understanding of their data transformation journey and how your background in financial services or the Lloyd's market can contribute to their goals.
Showcase Technical Skills: Clearly outline your technical skills related to Azure Data Factory, CI/CD workflows, and any experience with Microsoft Fabric. Use bullet points for clarity and ensure you provide context for each skill.
Highlight Collaboration Experience: Since the role involves working with various stakeholders, include examples of past collaborations with teams such as Underwriting, Actuarial, or Finance. This will show your ability to deliver high-impact data solutions in a team setting.
How to prepare for a job interview at Pioneer Search
✨Showcase Your Technical Skills
Be prepared to discuss your experience with Databricks, PySpark, and Azure in detail. Bring examples of projects where you've built data pipelines or worked with hybrid cloud environments, as this will demonstrate your hands-on expertise.
✨Understand the Business Context
Research the (re)insurance industry and the specific challenges it faces regarding data transformation. Showing that you understand how your role as a Data Engineer can impact decision-making across Underwriting, Actuarial, and Finance will impress your interviewers.
✨Emphasise Collaboration Skills
Since the role involves working with various stakeholders, be ready to share examples of how you've successfully collaborated with different teams in the past. Highlight your communication skills and ability to translate technical concepts for non-technical audiences.
✨Prepare for DevOps Discussions
Familiarise yourself with DevOps practices and CI/CD workflows, especially in the context of Azure. Be ready to discuss how you've implemented these practices in previous roles, as this is crucial for supporting the company's data solutions.