At a Glance
- Tasks: Design and build robust data pipelines while optimising data models for analytics.
- Company: Fast-scaling InsurTech business with a focus on data innovation.
- Benefits: Hybrid working, strong team culture, and opportunities for professional growth.
- Why this job: Shape the future of data infrastructure and make a real impact.
- Qualifications: Strong Data Engineering experience, advanced SQL skills, and GCP knowledge.
- Other info: Collaborative environment with high ownership and visibility.
The predicted salary is between £70,000 and £90,000 per year.
I’m currently partnered with a fast-scaling InsurTech business that’s investing heavily in its data platform as part of a wider digital group. They’re looking for a Staff-level Data Engineer to play a key role in shaping the next phase of their data infrastructure, particularly as they transition from Azure to GCP (BigQuery) over the next 12–18 months.
The Opportunity
This is a hands-on, high-impact IC role where you’ll take ownership of the data platform and help drive best practice across engineering, modelling, and data quality. You’ll be working closely with teams across the business (Product, Marketing, Pricing, Analytics), helping turn data requirements into scalable, reliable solutions.
What You’ll Be Doing
- Designing and building robust ETL/ELT pipelines
- Developing and optimising data models for analytics and reporting
- Driving improvements across data quality, performance, and scalability
- Supporting the migration to GCP / BigQuery
- Collaborating with analysts and stakeholders to deliver business-critical data solutions
- Acting as a technical lead / mentor within a lean data team
What They’re Looking For
- Strong experience in Data Engineering with advanced SQL skills
- Proven background building data pipelines and warehouse solutions
- Experience with Google Cloud Platform is essential, along with dbt (or similar modern data stack tools)
- Solid understanding of ETL/ELT and data modelling principles
- Comfortable working in a fast-paced, collaborative environment
- Experience with Git / CI/CD / DevOps practices (Python, APIs, and BI tooling exposure are a bonus, not essential)
The Environment
- Small, collaborative team with high ownership and visibility
- Strong backing from a well-established digital group
- A mix of start-up agility + enterprise scale
- Hybrid working: two days per week in the London office, plus occasional travel to Fleet
If you’re a Data Engineer looking for a role where you can genuinely shape a platform, not just maintain it, this is well worth a conversation.
Staff Data Engineer - GCP in London
Employer: Arrows
Contact: Arrows Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Staff Data Engineer - GCP role in London
✨Tip Number 1
Network like a pro! Reach out to people in the industry, especially those already working at companies you're interested in. A friendly chat can open doors and give you insider info that could help you stand out.
✨Tip Number 2
Show off your skills! Create a portfolio or GitHub repository showcasing your data engineering projects. This gives potential employers a tangible look at what you can do, especially with GCP and data pipelines.
✨Tip Number 3
Prepare for interviews by brushing up on your SQL and data modelling principles. Be ready to discuss your past experiences and how they relate to the role. Practice common technical questions to boost your confidence!
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen. Plus, we love seeing candidates who take the initiative to connect directly with us.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Data Engineering, especially your skills in SQL and GCP. We want to see how your background aligns with the role, so don’t be shy about showcasing relevant projects!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re excited about the opportunity to shape a data platform. We love seeing passion and enthusiasm, so let us know what drives you in this field.
Showcase Your Technical Skills: When filling out your application, be sure to mention your experience with ETL/ELT processes and any tools like dbt. We’re looking for someone who can hit the ground running, so highlight those technical skills that make you a great fit!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it makes the process smoother for everyone involved!
How to prepare for a job interview at Arrows
✨Know Your Data Stuff
Make sure you brush up on your SQL skills and be ready to discuss your experience with data pipelines and warehouse solutions. They’ll want to hear about specific projects you've worked on, especially those involving GCP and BigQuery.
✨Showcase Your Problem-Solving Skills
Prepare to talk about how you've tackled challenges in data engineering before. Think of examples where you improved data quality or performance, and be ready to explain your thought process and the impact of your solutions.
✨Collaboration is Key
Since this role involves working closely with various teams, be prepared to discuss how you’ve collaborated with product managers, analysts, or other stakeholders in the past. Highlight your communication skills and how you ensure everyone is on the same page.
✨Get Familiar with Their Tech Stack
Do some homework on the tools and technologies mentioned in the job description, like dbt and CI/CD practices. Being able to speak knowledgeably about these will show that you're genuinely interested and ready to hit the ground running.