At a Glance
- Tasks: Build and maintain data pipelines using Databricks, PySpark, and Azure.
- Company: Join a leading (re)insurance business transforming its data capabilities in the Lloyd's market.
- Benefits: Enjoy a competitive salary of £85,000 plus bonuses and benefits, with hybrid work options.
- Why this job: Be part of a dynamic team driving innovation in data solutions and decision-making.
- Qualifications: Experience with Databricks, PySpark, Azure Data Factory, and a solid understanding of Lakehouse architecture.
- Other info: Ideal for those interested in financial services and evolving data platforms like Microsoft Fabric.
The predicted salary is between £51,000 and £85,000 per year.
A Data Engineer is required for a fast-evolving (re)insurance business at the heart of the Lloyd's market, currently undergoing a major data transformation. With a strong foundation in the industry and a clear vision for the future, they're investing heavily in their data capabilities to build a best-in-class platform that supports smarter, faster decision-making across the business.
As part of this journey, they're looking for a Data Engineer to join their growing team. This is a hands-on role focused on building scalable data pipelines and enhancing a modern Lakehouse architecture using Databricks, PySpark, and Azure. The environment currently spans both cloud and on-prem, with a strategic move towards Microsoft Fabric underway - so experience across both is highly valued.
What you'll be doing:
- Building and maintaining robust data pipelines using Databricks, PySpark, and Azure Data Factory.
- Enhancing and maintaining a Lakehouse architecture built on Medallion principles (see the sketch after this list).
- Working across both cloud and on-prem environments, supporting the transition to Microsoft Fabric.
- Collaborating with stakeholders across Underwriting, Actuarial, and Finance to deliver high-impact data solutions.
- Supporting DevOps practices and CI/CD workflows in Azure.
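To give a flavour of the day-to-day work, here is a minimal PySpark sketch of a Medallion-style bronze-to-silver step of the kind the role involves. The paths, table names, and columns are hypothetical illustrations, not the employer's actual schema.

```python
# Minimal sketch of a Medallion bronze-to-silver step on Databricks.
# Paths, table names, and columns are hypothetical illustrations only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Bronze: raw policy records landed as-is (e.g. copied in by Azure Data Factory).
bronze = spark.read.format("delta").load("/mnt/lake/bronze/policies")

# Silver: deduplicated, typed, and filtered for basic quality.
silver = (
    bronze
    .dropDuplicates(["policy_id"])
    .withColumn("inception_date", F.to_date("inception_date", "yyyy-MM-dd"))
    .withColumn("gross_premium", F.col("gross_premium").cast("decimal(18,2)"))
    .filter(F.col("policy_id").isNotNull())
)

silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/policies")
```

In a production pipeline, a step like this would typically be orchestrated by Databricks Workflows or an Azure Data Factory pipeline rather than run ad hoc.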
Technical Requirements:
- Strong hands-on experience with:
  - Databricks (including pipeline development and orchestration)
  - PySpark and Python for data transformation and processing
  - Azure Data Factory for data integration
  - Medallion architecture and Lakehouse design principles
  - Hybrid cloud/on-prem environments
  - The Microsoft Azure ecosystem
  - Microsoft Fabric and its evolving role in enterprise data platforms
  - Azure DevOps for CI/CD and deployment
  - T-SQL and dimensional modelling (Kimball methodology) - a PySpark sketch of a Kimball-style gold layer follows this list
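Since the requirements pair Kimball dimensional modelling with a Lakehouse stack, here is a minimal PySpark sketch of a gold-layer star schema built from a silver table. All names, paths, and columns are hypothetical, and the surrogate-key approach is simplified for illustration.

```python
# Minimal sketch of a Kimball-style gold layer: one dimension plus one fact table.
# Names, paths, and columns are hypothetical illustrations only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("silver_to_gold").getOrCreate()

policies = spark.read.format("delta").load("/mnt/lake/silver/policies")

# Dimension: one row per class of business, with a simple surrogate key.
dim_class = (
    policies.select("class_of_business").distinct()
    .withColumn("class_key", F.monotonically_increasing_id())
)

# Fact: premium measures at policy grain, keyed to the dimension.
fact_premium = (
    policies.join(dim_class, "class_of_business")
    .select("class_key", "policy_id", "inception_date", "gross_premium")
)

dim_class.write.format("delta").mode("overwrite").save("/mnt/lake/gold/dim_class")
fact_premium.write.format("delta").mode("overwrite").save("/mnt/lake/gold/fact_premium")
```

In practice, surrogate keys are usually managed with Delta identity columns or MERGE logic so they stay stable across loads; monotonically_increasing_id is used here only to keep the sketch self-contained.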
Apply now or get in touch to find out more - alexh@pioneer-search.com
Data Engineer (City of London) employer: Pioneer Search
Contact: Pioneer Search Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer (City of London) role
✨Tip Number 1
Familiarise yourself with the specific technologies mentioned in the job description, such as Databricks, PySpark, and Azure. Consider building a small project or contributing to open-source projects that utilise these tools to demonstrate your hands-on experience.
✨Tip Number 2
Network with professionals in the insurance and data engineering sectors. Attend industry meetups or webinars focused on data transformation and cloud technologies to connect with potential colleagues and learn more about the company culture.
✨Tip Number 3
Prepare to discuss your experience with hybrid cloud environments and how you've managed transitions between on-prem and cloud solutions. Be ready to share specific examples of challenges you've faced and how you overcame them.
✨Tip Number 4
Research the company’s current data initiatives and their strategic move towards Microsoft Fabric. Understanding their goals will help you tailor your conversations during interviews and show that you're genuinely interested in contributing to their vision.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights relevant experience with Databricks, PySpark, and Azure. Use specific examples of projects where you've built data pipelines or worked with Lakehouse architecture to demonstrate your skills.
Craft a Compelling Cover Letter: In your cover letter, express your enthusiasm for the role and the company. Mention your understanding of their data transformation journey and how your background in hybrid cloud environments aligns with their needs.
Showcase Technical Skills: Clearly outline your technical skills related to the job description. Include your hands-on experience with Azure Data Factory, DevOps practices, and any exposure to Microsoft Fabric, as these are crucial for the role.
Highlight Collaboration Experience: Since the role involves working with various stakeholders, include examples of past collaborations with teams such as Underwriting, Actuarial, or Finance. This will show your ability to deliver high-impact data solutions.
How to prepare for a job interview at Pioneer Search
✨Showcase Your Technical Skills
Be prepared to discuss your hands-on experience with Databricks, PySpark, and Azure Data Factory. Highlight specific projects where you've built data pipelines or enhanced Lakehouse architectures, as this will demonstrate your capability to meet the job requirements.
✨Understand the Business Context
Research the (re)insurance industry and the Lloyd's market. Understanding how data impacts decision-making in these sectors will help you articulate how your skills can contribute to the company's goals during the interview.
✨Prepare for Scenario-Based Questions
Expect questions that assess your problem-solving abilities in real-world scenarios. Be ready to explain how you would approach building scalable data solutions or transitioning to Microsoft Fabric, showcasing your analytical thinking and technical expertise.
✨Emphasise Collaboration Skills
Since the role involves working with various stakeholders, be sure to highlight your experience in collaborating with teams across different functions. Share examples of how you've successfully communicated technical concepts to non-technical colleagues, which is crucial in a cross-functional environment.