At a Glance
- Tasks: Design and build robust data pipelines while optimising data models for analytics.
- Company: Fast-scaling InsurTech business with a strong digital group backing.
- Benefits: Hybrid working, high ownership, and visibility in a collaborative team.
- Why this job: Shape the future of data infrastructure and make a real impact.
- Qualifications: Strong Data Engineering experience with advanced SQL and GCP knowledge.
- Other info: Opportunity to mentor within a lean team and work in a dynamic environment.
The predicted salary is between £70,000 and £90,000 per year.
I’m currently partnered with a fast-scaling InsurTech business that’s investing heavily in its data platform as part of a wider digital group. They’re looking for a Staff-level Data Engineer to play a key role in shaping the next phase of their data infrastructure, particularly as they transition from Azure to GCP (BigQuery) over the next 12–18 months.
The Opportunity
This is a hands-on, high-impact IC role where you’ll take ownership of the data platform and help drive best practice across engineering, modelling, and data quality. You’ll be working closely with teams across the business (Product, Marketing, Pricing, Analytics), helping turn data requirements into scalable, reliable solutions.
What You’ll Be Doing
- Designing and building robust ETL/ELT pipelines
- Developing and optimising data models for analytics and reporting
- Driving improvements across data quality, performance, and scalability
- Supporting the migration to GCP / BigQuery
- Collaborating with analysts and stakeholders to deliver business-critical data solutions
- Acting as a technical lead / mentor within a lean data team
What They’re Looking For
- Strong experience in Data Engineering with advanced SQL skills
- Proven background building data pipelines and warehouse solutions
- Experience with Google Cloud Platform (essential) and dbt or similar modern data stack tools
- Solid understanding of ETL/ELT and data modelling principles
- Comfortable working in a fast-paced, collaborative environment
- Experience with Git / CI/CD / DevOps practices
- (Python, APIs, and BI tooling exposure are a bonus, not essential)
The Environment
- Small, collaborative team with high ownership and visibility
- Strong backing from a well-established digital group
- A mix of start-up agility and enterprise scale
- Hybrid working: 2 days per week in the London office, plus occasional travel to Fleet
If you’re a Data Engineer looking for a role where you can genuinely shape a platform, not just maintain it, this is well worth a conversation.
Staff Data Engineer - GCP in Slough (employer: Arrows)
Contact Detail:
Arrows Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Staff Data Engineer - GCP role in Slough
✨Tip Number 1
Network like a pro! Reach out to people in the InsurTech space or those already working at the company. A friendly chat can give you insider info and might even lead to a referral.
✨Tip Number 2
Show off your skills! If you’ve got a portfolio of projects or GitHub repos, make sure to highlight them. Demonstrating your experience with GCP and data pipelines can really set you apart.
✨Tip Number 3
Prepare for the interview by brushing up on your SQL and data modelling principles. Be ready to discuss how you’d tackle real-world problems they face, especially around their migration to GCP.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who take that extra step!
We think you need these skills to ace the Staff Data Engineer - GCP role in Slough
Some tips for your application 🫡
Tailor Your CV: Make sure your CV speaks directly to the Staff Data Engineer role. Highlight your experience with GCP, data pipelines, and any relevant projects that showcase your skills in SQL and ETL/ELT processes.
Craft a Compelling Cover Letter: Use your cover letter to tell us why you're passionate about data engineering and how you can contribute to our team. Mention specific experiences that align with the job description, especially around collaboration and driving data quality.
Showcase Your Technical Skills: Don’t shy away from listing your technical proficiencies! Make sure to include your experience with Git, CI/CD, and any exposure to Python or BI tools. We love seeing candidates who are well-rounded in their technical abilities.
Apply Through Our Website: We encourage you to apply through our website for a smoother application process. It helps us keep track of your application and ensures you don’t miss out on any important updates from us!
How to prepare for a job interview at Arrows
✨Know Your Data Stuff
Make sure you brush up on your data engineering skills, especially around ETL/ELT processes and data modelling principles. Be ready to discuss your experience with SQL and how you've built data pipelines in the past. This role is all about shaping the data platform, so show them you know your stuff!
✨Familiarise Yourself with GCP
Since this position involves a transition to Google Cloud Platform, it’s crucial to understand its features, particularly BigQuery. If you’ve worked with GCP before, prepare to share specific examples of how you’ve used it to solve problems or improve data quality.
✨Collaboration is Key
This role requires working closely with various teams like Product and Marketing. Think of examples where you’ve successfully collaborated with stakeholders to deliver data solutions. Highlight your communication skills and how you can bridge the gap between technical and non-technical teams.
✨Show Your Leadership Potential
As a Staff Data Engineer, you’ll be expected to act as a technical lead and mentor. Prepare to discuss any previous experiences where you’ve taken ownership of projects or guided junior team members. They want to see that you can inspire and elevate those around you!