At a Glance
- Tasks: Design and optimise data pipelines using Azure Databricks for smarter decision-making.
- Company: Join Somerset Bridge Group, a leader in the insurance sector, driving innovation through data.
- Benefits: Enjoy hybrid working, generous leave, healthcare cash plan, and exclusive discounts.
- Why this job: Be part of a greenfield project that shapes the future of data in insurance.
- Qualifications: Experience with Azure Data Factory, Databricks, and strong SQL skills required.
- Other info: Flexible hours and a supportive culture make this role perfect for students.
The predicted salary is between £47,000 and £61,000 per year.
Application Deadline: 27 June 2025
Department: [SBSS] Enterprise Data Management
Location: Newcastle
Compensation: £55,000 – £68,500 / year
Description
We’re building something special — and we need a talented Data Engineer to help bring our Azure data platform to life.
This is your chance to work on a greenfield Enterprise Data Warehouse programme in the insurance sector, shaping data pipelines and platforms that power smarter decisions, better pricing, and sharper customer insights.
The Data Engineer will design, build, and optimise scalable data pipelines within Azure Databricks, ensuring high-quality, reliable data is available to support pricing, underwriting, claims, and operational decision-making. This role is critical in modernising SBG’s cloud-based data infrastructure, ensuring compliance with FCA/PRA regulations, and enabling AI-driven analytics and automation.
By leveraging Azure-native services, such as Azure Data Factory (ADF) for orchestration, Delta Lake for ACID-compliant data storage, and Databricks Structured Streaming for real-time data processing, the Data Engineer will help unlock insights, enhance pricing accuracy, and drive innovation. The role also includes optimising Databricks query performance, implementing robust security controls (RBAC, Unity Catalog), and ensuring enterprise-wide data reliability.
Working closely with Data Architects, Pricing Teams, Data Analysts, and IT, this role will ensure our Azure Databricks data ecosystem is scalable, efficient, and aligned with business objectives. Additionally, the Data Engineer will contribute to cost optimisation, governance, and automation within Azure’s modern data platform.
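To give a flavour of the day-to-day work, the sketch below shows incremental ingestion into Delta Lake with Databricks Auto Loader and Structured Streaming, of the kind an ADF-triggered job might run. It is a minimal illustration only; the storage paths and table names are hypothetical, not part of SBG's actual platform.

```python
# Minimal sketch: incremental ingestion from an ADLS Gen2 landing zone into a
# Delta table via Auto Loader / Structured Streaming. All paths and table
# names below are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

raw = (
    spark.readStream.format("cloudFiles")                 # Auto Loader source
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation",
            "abfss://lake@account.dfs.core.windows.net/_schemas/claims")
    .load("abfss://lake@account.dfs.core.windows.net/landing/claims/")
)

# Light transformation: stamp each record with its ingestion time.
cleaned = raw.withColumn("ingested_at", F.current_timestamp())

(
    cleaned.writeStream.format("delta")                   # ACID-compliant Delta sink
    .option("checkpointLocation",
            "abfss://lake@account.dfs.core.windows.net/_checkpoints/claims")
    .trigger(availableNow=True)                           # drain available files, then stop
    .toTable("bronze.claims")                             # e.g. scheduled from ADF
)
```

With `availableNow=True` the same job serves batch-style scheduled ingestion; removing the trigger turns it into a continuously running stream.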
Key Responsibilities
- Data Pipeline Development – Design, build, and maintain scalable ELT pipelines using Azure Databricks, Azure Data Factory (ADF), and Delta Lake to automate real-time and batch data ingestion.
- Cloud Data Engineering – Develop and optimise data solutions within Azure, ensuring efficiency, cost-effectiveness, and scalability, leveraging Azure Synapse Analytics, ADLS Gen2, and Databricks Workflows.
- Data Modelling & Architecture – Implement robust data models to support analytics, reporting, and machine learning, using Delta Lake and Azure Synapse.
- Automation & Observability – Use Databricks Workflows, dbt, and Azure Monitor to manage transformations, monitor query execution, and implement data reliability checks.
- Data Quality & Governance – Ensure data integrity, accuracy, and compliance with industry regulations (FCA, Data Protection Act, PRA) using Databricks Unity Catalog and Azure Purview.
- Collaboration & Stakeholder Engagement – Work closely with Data Scientists, Pricing, Underwriting, and IT to deliver data-driven solutions aligned with business objectives.
- Data Governance & Security – Implement RBAC, column-level security, row-access policies, and data masking to protect sensitive customer data and ensure FCA/PRA regulatory compliance.
- Innovation & Continuous Improvement – Identify and implement emerging data technologies within the Azure ecosystem, such as Delta Live Tables (DLT), Structured Streaming, and AI-driven analytics to enhance business capabilities.
Skills, Knowledge and Expertise
- Hands-on experience in building ELT pipelines and working with large-scale datasets using Azure Data Factory (ADF) and Databricks.
- Strong proficiency in SQL (T-SQL, Spark SQL) for data extraction, transformation, and optimisation.
- Proficiency in Azure Databricks (PySpark, Delta Lake, Spark SQL) for big data processing.
- Knowledge of data warehousing concepts and relational database design, particularly with Azure Synapse Analytics.
- Experience working with Delta Lake for schema evolution, ACID transactions, and time travel in Databricks.
- Strong Python (PySpark) skills for big data processing and automation.
- Experience with Scala (optional but preferred for advanced Spark applications).
- Experience working with Databricks Workflows & Jobs for data orchestration.
- Strong knowledge of feature engineering and feature stores, particularly the Databricks Feature Store for ML training and inference.
- Experience with data modelling techniques to support analytics and reporting.
- Familiarity with real-time data processing and API integrations (e.g., Kafka, Spark Streaming).
- Proficiency in CI/CD pipelines for data deployment using Azure DevOps, GitHub Actions, or Terraform for Infrastructure as Code (IaC).
- Understanding of MLOps principles, including continuous integration (CI), continuous delivery (CD), and continuous training (CT) for machine learning models.
- Experience with performance tuning and query optimisation for efficient data workflows.
- Strong understanding of query optimisation techniques in Databricks (caching, partitioning, indexing, and auto-scaling clusters); a short sketch of typical maintenance commands follows this list.
- Experience monitoring Databricks workloads using Azure Monitor, Log Analytics, and Databricks performance insights.
- Familiarity with cost optimisation strategies in Databricks and ADLS Gen2 (e.g., managing compute resources efficiently).
- Problem-solving mindset – Ability to diagnose issues and implement efficient solutions.
- Experience implementing Databricks Unity Catalog for data governance, access control, and lineage tracking (an illustrative governance setup follows this list).
- Understanding of Azure Purview for data cataloging and metadata management.
- Familiarity with object-level and row-level security in Azure Synapse and Databricks.
- Experience working with Azure Event Hubs, Azure Data Explorer, or Kafka for real-time data streaming.
- Hands-on experience with Databricks Structured Streaming for real-time and near-real-time data processing.
- Understanding of Delta Live Tables (DLT) for automated ELT and real-time transformations.
- Analytical thinking – Strong ability to translate business needs into technical data solutions.
- Attention to detail – Ensures accuracy, reliability, and quality of data.
- Communication skills – Clearly conveys technical concepts to non-technical stakeholders.
- Collaboration – Works effectively with cross-functional teams, including Pricing, Underwriting, and IT.
- Adaptability – Thrives in a fast-paced, agile environment with evolving priorities.
- Stakeholder management – Builds strong relationships and understands business requirements.
- Innovation-driven – Stays up to date with emerging technologies and industry trends.
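For the query-optimisation and time-travel items above, a few representative Delta Lake commands are sketched below; table names, version numbers, and Z-order columns are hypothetical.

```python
# Hedged sketch of routine Delta Lake maintenance and time travel.
# Table names, version numbers, and Z-order columns are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

# Compact small files and co-locate a frequently filtered key.
spark.sql("OPTIMIZE insurance.silver.claims ZORDER BY (policy_id)")

# Inspect the transaction log that underpins ACID guarantees and time travel.
spark.sql("DESCRIBE HISTORY insurance.silver.claims").show(truncate=False)

# Time travel: read the table as it stood at an earlier version.
df_v3 = spark.read.option("versionAsOf", 3).table("insurance.silver.claims")
```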
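Similarly, the Unity Catalog governance items (RBAC, row-access policies, data masking) typically reduce to SQL run against the metastore. The sketch below is illustrative only; catalog, schema, table, and group names are hypothetical, and the row-filter and mask syntax assumes a Unity Catalog-enabled workspace.

```python
# Hedged sketch of Unity Catalog row-level security and column masking,
# issued as SQL from PySpark. All object and group names are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Row filter: underwriters only see rows for their own region.
spark.sql("""
CREATE OR REPLACE FUNCTION gov.policies.region_filter(region STRING)
RETURNS BOOLEAN
RETURN is_account_group_member('uw_' || region)
""")
spark.sql("""
ALTER TABLE insurance.silver.policies
SET ROW FILTER gov.policies.region_filter ON (region)
""")

# Column mask: hide customer emails from everyone outside data engineering.
spark.sql("""
CREATE OR REPLACE FUNCTION gov.policies.mask_email(email STRING)
RETURNS STRING
RETURN CASE WHEN is_account_group_member('data_engineers')
            THEN email ELSE '***' END
""")
spark.sql("""
ALTER TABLE insurance.silver.policies
ALTER COLUMN customer_email SET MASK gov.policies.mask_email
""")
```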
Our Benefits
- Hybrid working – 2 days in the office and 3 days working from home
- 25 days annual leave, rising to 27 days over 2 years’ service and 30 days after 5 years’ service. Plus bank holidays!
- Discretionary annual bonus
- Pension scheme – 5% employee, 6% employer
- Flexible working – we will always consider applications for those who require less than the advertised hours
- Flexi-time
- Healthcare Cash Plan – claim cashback on a variety of everyday healthcare costs
- Electric vehicle – salary sacrifice scheme
- Hundreds of exclusive retailer discounts
- Professional wellbeing, health & fitness app – Wrkit
- Enhanced parental leave, including time off for IVF appointments
- Religious bank holidays – if you don’t celebrate Christmas and Easter, you can use these annual leave days on other occasions throughout the year.
- Life Assurance – 4 times your salary
- 25% Car Insurance Discount
- 20% Travel Insurance Discount
- Cycle to Work Scheme
- Employee Referral Scheme
- Community support day
- Seniority level: Associate
- Employment type: Full-time
- Job function: Information Technology
- Industries: Insurance
Data Engineer employer: Somerset Bridge Group
Contact Detail: Somerset Bridge Group Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role
✨Tip Number 1
Familiarise yourself with Azure Databricks and its features, as this role heavily relies on it. Consider building a small project or contributing to an open-source one that uses Azure Databricks to showcase your hands-on experience.
✨Tip Number 2
Network with professionals in the data engineering field, especially those who work with Azure technologies. Attend meetups or webinars to connect with potential colleagues and learn about industry trends that could give you an edge.
✨Tip Number 3
Stay updated on the latest developments in data governance and compliance, particularly FCA/PRA regulations. Being knowledgeable about these can set you apart as a candidate who understands the importance of data integrity in the insurance sector.
✨Tip Number 4
Prepare to discuss your problem-solving skills and how you've tackled challenges in previous projects. Be ready to provide specific examples of how you've optimised data pipelines or improved data quality, as this will demonstrate your practical experience.
We think you need these skills to ace the Data Engineer role
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights relevant experience and skills that align with the Data Engineer role. Focus on your hands-on experience with Azure Data Factory, Databricks, and SQL, as these are crucial for the position.
Craft a Compelling Cover Letter: In your cover letter, express your enthusiasm for the role and the company. Mention specific projects or experiences that demonstrate your ability to design and optimise data pipelines, and how you can contribute to Somerset Bridge Group's goals.
Showcase Relevant Projects: If you've worked on any relevant projects, especially those involving Azure technologies or data engineering, be sure to include them in your application. Describe your role, the technologies used, and the impact of the project.
Highlight Soft Skills: Don't forget to mention your soft skills, such as communication and collaboration. The role requires working closely with various teams, so showcasing your ability to convey technical concepts to non-technical stakeholders will strengthen your application.
How to prepare for a job interview at Somerset Bridge Group
✨Showcase Your Technical Skills
Be prepared to discuss your hands-on experience with Azure Data Factory, Databricks, and SQL. Highlight specific projects where you've built ELT pipelines or optimised data solutions, as this will demonstrate your capability to handle the technical demands of the role.
✨Understand the Business Context
Familiarise yourself with the insurance sector and how data engineering impacts pricing, underwriting, and claims. Being able to articulate how your work can drive smarter decisions and enhance customer insights will set you apart from other candidates.
✨Prepare for Problem-Solving Questions
Expect to face scenario-based questions that assess your problem-solving skills. Think about past challenges you've encountered in data engineering and how you resolved them, particularly in relation to data quality, governance, and performance tuning.
✨Demonstrate Collaboration Skills
Since the role involves working closely with various teams, be ready to discuss your experience collaborating with Data Scientists, Pricing Teams, and IT. Share examples of how you've effectively communicated technical concepts to non-technical stakeholders to ensure alignment on project goals.