At a Glance
- Tasks: Design and optimise data pipelines using Azure Databricks for smarter decision-making.
- Company: Join Somerset Bridge Group, a leader in innovative insurance solutions with a commitment to excellence.
- Benefits: Enjoy hybrid working, generous leave, healthcare cash plans, and exclusive discounts.
- Why this job: Be part of a dynamic team shaping the future of data in the insurance sector.
- Qualifications: Hands-on experience with Azure Data Factory, Databricks, and strong SQL skills required.
- Other info: Flexible working hours and a focus on personal development make this role unique.
The predicted salary is between £47,000 and £58,000 per year.
Department: [SBSS] Enterprise Data Management
Location: Newcastle
Compensation: £55,000 – £68,500 / year
Description
We’re building something special — and we need a talented Data Engineer to help bring our Azure data platform to life.
This is your chance to work on a greenfield Enterprise Data Warehouse programme in the insurance sector, shaping data pipelines and platforms that power smarter decisions, better pricing, and sharper customer insights.
The Data Engineer will design, build, and optimise scalable data pipelines within Azure Databricks, ensuring high-quality, reliable data is available to support pricing, underwriting, claims, and operational decision-making. This role is critical in modernising SBG’s cloud-based data infrastructure, ensuring compliance with FCA/PRA regulations, and enabling AI-driven analytics and automation.
By leveraging Azure-native services, such as Azure Data Factory (ADF) for orchestration, Delta Lake for ACID-compliant data storage, and Databricks Structured Streaming for real-time data processing, the Data Engineer will help unlock insights, enhance pricing accuracy, and drive innovation. The role also includes optimising Databricks query performance, implementing robust security controls (RBAC, Unity Catalog), and ensuring enterprise-wide data reliability.
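For a flavour of that stack, here is a minimal PySpark sketch of streaming ingestion into Delta Lake, assuming a hypothetical claims feed; the storage path, schema, and table names are placeholders rather than SBG's actual pipeline.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("claims-stream-ingest").getOrCreate()

# Hypothetical schema for incoming claim events.
event_schema = StructType([
    StructField("claim_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Incrementally pick up JSON files as they land in ADLS Gen2.
raw = (spark.readStream
            .schema(event_schema)
            .json("abfss://landing@<storage-account>.dfs.core.windows.net/claims/"))

# Append into a Delta table; the checkpoint gives exactly-once, ACID-compliant writes.
(raw.withColumn("ingested_at", F.current_timestamp())
    .writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/claims_stream")
    .toTable("bronze.claims_events"))
```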
Working closely with Data Architects, Pricing Teams, Data Analysts, and IT, this role will ensure our Azure Databricks data ecosystem is scalable, efficient, and aligned with business objectives. Additionally, the Data Engineer will contribute to cost optimisation, governance, and automation within Azure’s modern data platform.
What you’ll be responsible for
- Data Pipeline Development – Design, build, and maintain scalable ELT pipelines using Azure Databricks, Azure Data Factory (ADF), and Delta Lake to automate real-time and batch data ingestion (a sketch of such a pipeline follows this list).
- Cloud Data Engineering – Develop and optimise data solutions within Azure, ensuring efficiency, cost-effectiveness, and scalability, leveraging Azure Synapse Analytics, ADLS Gen2, and Databricks Workflows.
- Data Modelling & Architecture – Implement robust data models to support analytics, reporting, and machine learning, using Delta Lake and Azure Synapse.
- Automation & Observability – Use Databricks Workflows, dbt, and Azure Monitor to manage transformations, monitor query execution, and implement data reliability checks.
- Data Quality & Governance – Ensure data integrity, accuracy, and compliance with industry regulations (FCA, Data Protection Act, PRA) using Databricks Unity Catalog and Azure Purview.
- Collaboration & Stakeholder Engagement – Work closely with Data Scientists, Pricing, Underwriting, and IT to deliver data-driven solutions aligned with business objectives.
- Data Governance & Security – Implement RBAC, column-level security, row-access policies, and data masking to protect sensitive customer data and ensure FCA/PRA regulatory compliance (see the governance sketch after this list).
- Innovation & Continuous Improvement – Identify and implement emerging data technologies within the Azure ecosystem, such as Delta Live Tables (DLT), Structured Streaming, and AI-driven analytics to enhance business capabilities.
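To make the pipeline and observability responsibilities concrete, here is a hedged sketch of a batch ELT step: a Databricks job (which ADF might trigger) that deduplicates bronze data, applies a simple reliability check, and merges the result into a silver Delta table. Table names and the quality threshold are assumptions, not SBG's actual configuration.

```python
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.table("bronze.claims_events")
clean = (bronze
         .dropDuplicates(["claim_id"])
         .filter(F.col("amount").isNotNull()))

# Simple data-reliability gate before publishing downstream (threshold is illustrative).
null_ratio = bronze.filter(F.col("amount").isNull()).count() / max(bronze.count(), 1)
if null_ratio > 0.05:
    raise ValueError(f"Null ratio {null_ratio:.1%} breaches the quality threshold")

# Idempotent upsert into the silver layer using Delta's ACID MERGE.
target = DeltaTable.forName(spark, "silver.claims")
(target.alias("t")
       .merge(clean.alias("s"), "t.claim_id = s.claim_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```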
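And for the governance and security responsibility, a second sketch showing the Unity Catalog controls named above: a role-based grant, a row-access policy, and a column mask. Group names, functions, and the customers table are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# RBAC: grant read access to a hypothetical analyst group.
spark.sql("GRANT SELECT ON TABLE silver.customers TO `pricing_analysts`")

# Row-access policy: admins see everything; everyone else sees only UK rows.
spark.sql("""
CREATE OR REPLACE FUNCTION silver.uk_rows_only(region STRING)
RETURNS BOOLEAN
RETURN is_account_group_member('data_admins') OR region = 'UK'
""")
spark.sql("ALTER TABLE silver.customers SET ROW FILTER silver.uk_rows_only ON (region)")

# Column mask: redact national insurance numbers outside the compliance group.
spark.sql("""
CREATE OR REPLACE FUNCTION silver.mask_nino(nino STRING)
RETURNS STRING
RETURN CASE WHEN is_account_group_member('compliance') THEN nino ELSE '***' END
""")
spark.sql("ALTER TABLE silver.customers ALTER COLUMN nino SET MASK silver.mask_nino")
```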
What you’ll need
- Hands-on experience in building ELT pipelines and working with large-scale datasets using Azure Data Factory (ADF) and Databricks.
- Strong proficiency in SQL (T-SQL, Spark SQL) for data extraction, transformation, and optimisation.
- Proficiency in Azure Databricks (PySpark, Delta Lake, Spark SQL) for big data processing.
- Knowledge of data warehousing concepts and relational database design, particularly with Azure Synapse Analytics.
- Experience working with Delta Lake for schema evolution, ACID transactions, and time travel in Databricks (see the Delta Lake sketch after this list).
- Strong Python (PySpark) skills for big data processing and automation.
- Experience with Scala (optional but preferred for advanced Spark applications).
- Experience working with Databricks Workflows & Jobs for data orchestration.
- Strong knowledge of feature engineering and feature stores, particularly the Databricks Feature Store for ML training and inference.
- Experience with data modelling techniques to support analytics and reporting.
- Familiarity with real-time data processing and API integrations (e.g., Kafka, Spark Streaming).
- Proficiency in CI/CD pipelines for data deployment using Azure DevOps, GitHub Actions, or Terraform for Infrastructure as Code (IaC).
- Understanding of MLOps principles, including continuous integration (CI), continuous delivery (CD), and continuous training (CT) for machine learning models.
- Experience with performance tuning and query optimisation for efficient data workflows.
- Strong understanding of query optimisation techniques in Databricks (caching, partitioning, indexing, and auto-scaling clusters).
- Experience monitoring Databricks workloads using Azure Monitor, Log Analytics, and Databricks performance insights.
- Familiarity with cost optimisation strategies in Databricks and ADLS Gen2 (e.g., managing compute resources efficiently).
- Problem-solving mindset – Ability to diagnose issues and implement efficient solutions.
- Experience implementing Databricks Unity Catalog for data governance, access control, and lineage tracking.
- Understanding of Azure Purview for data cataloging and metadata management.
- Familiarity with object-level and row-level security in Azure Synapse and Databricks.
- Experience working with Azure Event Hubs, Azure Data Explorer, or Kafka for real-time data streaming.
- Hands-on experience with Databricks Structured Streaming for real-time and near-real-time data processing.
- Understanding of Delta Live Tables (DLT) for automated ELT and real-time transformations (see the DLT sketch after this list).
- Analytical thinking – Strong ability to translate business needs into technical data solutions.
- Attention to detail – Ensures accuracy, reliability, and quality of data.
- Communication skills – Clearly conveys technical concepts to non-technical stakeholders.
- Collaboration – Works effectively with cross-functional teams, including Pricing, Underwriting, and IT.
- Adaptability – Thrives in a fast-paced, agile environment with evolving priorities.
- Stakeholder management – Builds strong relationships and understands business requirements.
- Innovation-driven – Stays up to date with emerging technologies and industry trends.
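Two of the requirements above lend themselves to a quick illustration. First, a hedged sketch of Delta Lake schema evolution, time travel, and Databricks query tuning; table and column names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
updates = spark.read.table("bronze.claims_events")

# Schema evolution: allow new columns in the source to merge into the target schema.
(updates.write.format("delta")
        .mode("append")
        .option("mergeSchema", "true")
        .saveAsTable("silver.claims"))

# Time travel: reproduce an earlier snapshot of the table for an audit or backfill.
snapshot = (spark.read
                 .option("timestampAsOf", "2025-01-01")
                 .table("silver.claims"))

# Tuning: compact small files and co-locate rows on a common filter column.
spark.sql("OPTIMIZE silver.claims ZORDER BY (claim_id)")
```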
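Second, a minimal Delta Live Tables sketch: a declarative pipeline with an expectation that drops invalid records. It assumes the code runs inside a DLT pipeline, where the `dlt` module and `spark` session are provided, and all names are again illustrative.

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw claim events streamed from the landing zone")
def claims_raw():
    return spark.readStream.table("bronze.claims_events")

@dlt.table(comment="Validated claims for pricing and reporting")
@dlt.expect_or_drop("valid_amount", "amount IS NOT NULL AND amount >= 0")
def claims_validated():
    return (dlt.read_stream("claims_raw")
               .withColumn("processed_at", F.current_timestamp()))
```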
Our Benefits
- Hybrid working – 2 days in the office and 3 days working from home
- 25 days annual leave, rising to 27 days over 2 years’ service and 30 days after 5 years’ service. Plus bank holidays!
- Discretionary annual bonus
- Pension scheme – 5% employee, 6% employer
- Flexible working – we will always consider applications for those who require less than the advertised hours
- Flexi-time
- Healthcare Cash Plan – claim cashback on a variety of everyday healthcare costs
- Electric vehicle – salary sacrifice scheme
- Hundreds of exclusive retailer discounts
- Professional wellbeing, health & fitness app – Wrkit
- Enhanced parental leave, including time off for IVF appointments
- Religious bank holidays – if you don’t celebrate Christmas and Easter, you can use these annual leave days on other occasions throughout the year.
- Life Assurance – 4 times your salary
- 25% Car Insurance Discount
- 20% Travel Insurance Discount
- Cycle to Work Scheme
- Employee Referral Scheme
- Community support day
About Somerset Bridge Group
Somerset Bridge Group is dedicated to delivering fair products and innovative services in the insurance industry. Our group focuses on underwriting, broking, and claims handling to provide sustainable and innovative insurance solutions. Somerset Bridge Insurance Services Limited, operating under GoSkippy and Vavista, offers insurance coverage to over 700,000 customers. Somerset Bridge Limited handles underwriting and claims, processing almost 50,000 claims annually. Somerset Bridge Shared Services Limited provides essential support functions to ensure operational efficiency and compliance. With a strong commitment to values, culture, and customer service excellence, Somerset Bridge Group is recognised for its industry awards and growth. Join us to be part of a dynamic team that fosters creative thinking and personal development.
We are very proud to have been awarded a Silver Accreditation from Investors in People! We recognise that all of our people contribute to our success. That’s why we are always looking for talented people to join our team – people who share our vision, who are passionate about what they do, and who want to be part of something special.
Equal Opportunity Employer
Somerset Bridge Group is committed to creating a diverse environment and is proud to be an Equal Opportunity Employer. We prohibit discrimination or harassment of any kind based on race, color, religion, national origin, sexual orientation, gender, gender identity or expression, age, pregnancy, physical or mental disability, genetic factors or other characteristics protected by law. SBG makes hiring decisions based solely on qualifications, skills and business requirements.
Contact: Somerset Bridge Group Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role
✨Tip Number 1
Familiarise yourself with Azure Databricks and its features, especially around data pipeline development. Understanding how to leverage tools like Azure Data Factory and Delta Lake will give you a significant edge during discussions.
✨Tip Number 2
Network with professionals in the insurance sector who are already working with data engineering. Engaging in conversations about their experiences can provide insights into the specific challenges and expectations of the role.
✨Tip Number 3
Stay updated on the latest trends in data engineering and cloud technologies. Being able to discuss recent advancements or innovations in Azure services during your interactions can demonstrate your passion and commitment to the field.
✨Tip Number 4
Prepare to showcase your problem-solving skills by thinking through potential scenarios you might encounter in the role. Being ready to discuss how you would approach real-world data challenges can set you apart from other candidates.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights relevant experience and skills that align with the Data Engineer role. Focus on your hands-on experience with Azure Data Factory, Databricks, and SQL, as well as any projects that demonstrate your ability to build scalable data pipelines.
Craft a Compelling Cover Letter: In your cover letter, express your enthusiasm for the role and the company. Mention specific projects or technologies you have worked with that relate to the job description, such as Delta Lake or real-time data processing, to show your suitability for the position.
Showcase Problem-Solving Skills: Use examples from your past experiences to illustrate your problem-solving mindset. Discuss how you've diagnosed issues in data workflows and implemented efficient solutions, particularly in cloud environments like Azure.
Highlight Collaboration Experience: Since the role involves working closely with various teams, emphasise your collaboration skills. Provide examples of how you've successfully engaged with cross-functional teams, such as Data Scientists or IT, to deliver data-driven solutions.
How to prepare for a job interview at Somerset Bridge Group
✨Showcase Your Technical Skills
Be prepared to discuss your hands-on experience with Azure Data Factory, Databricks, and SQL. Highlight specific projects where you've built ELT pipelines or optimised data workflows, as this will demonstrate your capability to handle the technical demands of the role.
✨Understand the Business Context
Familiarise yourself with the insurance sector and how data engineering impacts pricing, underwriting, and claims. Being able to articulate how your work can drive smarter decisions and enhance customer insights will set you apart from other candidates.
✨Prepare for Scenario-Based Questions
Expect questions that assess your problem-solving skills and ability to work under pressure. Prepare examples of challenges you've faced in previous roles, particularly related to data quality, governance, or performance tuning, and how you resolved them.
✨Demonstrate Collaboration Skills
Since the role involves working closely with various teams, be ready to discuss your experience collaborating with Data Scientists, Pricing Teams, and IT. Share examples of how you've effectively communicated technical concepts to non-technical stakeholders to ensure alignment on project goals.