At a Glance
- Tasks: Build and optimise data pipelines while tackling complex data challenges.
- Company: Established intelligence platform with a focus on innovative data solutions.
- Benefits: Hybrid working, competitive salary, and opportunities for professional growth.
- Why this job: Gain real ownership of the data lifecycle and solve intriguing data problems.
- Qualifications: Strong software engineering skills and experience with complex data systems.
- Other info: Join a dynamic team aiming to elevate technical standards.
The predicted salary is between £60,000 and £80,000 per year.
We’re working with a long-established intelligence platform hiring into a data platform engineering role focused on some genuinely complex data problems. This is not a typical backend role and not a pure data engineering position either. It sits across ETL pipelines, knowledge graphs, search systems and APIs within an event-driven AWS environment.
What you’ll be working on:
- Building and optimising data pipelines from ingestion through to delivery
- Working with large-scale, relationship-heavy datasets (hundreds of millions of connections)
- Supporting a knowledge graph that underlies search and data products
- Improving how data flows into search indexes and APIs
- Tackling performance challenges across distributed systems
Tech environment:
- AWS
- OpenSearch / Elasticsearch
- Graph technologies (Neo4j-style)
- Event-driven architecture
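To make the shape of that stack concrete, here is a minimal, purely illustrative sketch in plain Python (no AWS or OpenSearch dependencies): an event handler ingests relationship events into an in-memory knowledge graph, then flattens entities into the kind of documents that would feed a search index. Every name and event shape here is hypothetical.

```python
# Illustrative sketch only: models the flow of an event-driven pipeline
# (ingest -> knowledge graph -> search documents) in plain Python.
# In the real stack these stages would sit behind AWS eventing and OpenSearch.
from collections import defaultdict

# Hypothetical relationship events, as a stream consumer might receive them.
events = [
    {"source": "AcmeCorp", "relation": "SUBSIDIARY_OF", "target": "AcmeHoldings"},
    {"source": "AcmeCorp", "relation": "SUPPLIER_TO", "target": "Globex"},
    {"source": "Globex", "relation": "SUBSIDIARY_OF", "target": "GlobexGroup"},
]

graph = defaultdict(list)  # entity -> [(relation, neighbour), ...]

def handle_event(event):
    """Ingest one relationship event into the in-memory knowledge graph."""
    graph[event["source"]].append((event["relation"], event["target"]))

def to_search_doc(entity):
    """Flatten an entity's relationships into a document for a search index."""
    return {
        "id": entity,
        "degree": len(graph[entity]),
        "related": [target for _, target in graph[entity]],
    }

for e in events:
    handle_event(e)

doc = to_search_doc("AcmeCorp")
print(doc)  # {'id': 'AcmeCorp', 'degree': 2, 'related': ['AcmeHoldings', 'Globex']}
```

At production scale the interesting problems are exactly the ones the role describes: keeping hundreds of millions of these connections consistent between the graph store and the search indexes as events arrive.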
What they’re looking for:
- Strong software engineering background
- Experience working with complex data systems or data pipelines
- Exposure to search, graph databases, or distributed systems
- Someone who enjoys solving data problems end‑to‑end rather than specialising narrowly
Why it’s interesting:
- Real ownership across the full data lifecycle
- Complex graph and data modelling challenges
- Opportunity to shape how a large-scale data platform evolves
- Team with strong foundations but looking to raise the technical bar
Package:
- Hybrid working (Croydon or Manchester, 2 days per week)
Employer: Wave Group (Senior Data Platform Engineer, London)
Contact Detail:
Wave Group Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data Platform Engineer role in London
✨Tip Number 1
Network like a pro! Reach out to people in the industry, especially those who work at companies you're interested in. A friendly chat can open doors and give you insights that a job description just can't.
✨Tip Number 2
Show off your skills! If you've got a portfolio or GitHub with projects related to data pipelines or graph technologies, make sure to share it. It’s a great way to demonstrate your hands-on experience and problem-solving abilities.
✨Tip Number 3
Prepare for technical interviews by brushing up on your knowledge of AWS, ETL processes, and distributed systems. Practise explaining complex concepts simply; it shows you really understand your stuff and can communicate effectively.
✨Tip Number 4
Don’t forget to apply through our website! We’re always on the lookout for talented individuals who can tackle complex data challenges. Plus, it gives us a chance to see your application in the best light!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the skills and experiences that align with the Senior Data Platform Engineer role. Highlight your experience with data pipelines, AWS, and any graph technologies you've worked with. We want to see how you can tackle those complex data problems!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're excited about this role and how your background makes you a perfect fit. Don’t forget to mention your passion for solving end-to-end data challenges – we love that!
Showcase Relevant Projects: If you've worked on projects involving ETL pipelines, search systems, or distributed architectures, make sure to include them in your application. We’re keen to see real examples of how you've tackled similar challenges in the past.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you don’t miss out on any important updates. Plus, we love seeing applications come through our own channels!
How to prepare for a job interview at Wave Group
✨Know Your Data Inside Out
Make sure you understand the complexities of data pipelines and how they interact with various systems. Brush up on your knowledge of ETL processes, especially in an AWS environment, as this will be crucial for the role.
✨Showcase Your Problem-Solving Skills
Prepare to discuss specific examples where you've tackled complex data problems. Think about situations where you improved data flow or optimised performance in distributed systems, as these experiences will resonate well with the interviewers.
✨Familiarise Yourself with Graph Technologies
Since the role involves working with graph databases like Neo4j, it’s a good idea to have a solid understanding of how these technologies work. Be ready to explain how you’ve used them in past projects or how you would approach using them in this new role.
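If you want a concrete example to reach for when discussing graph traversal, a breadth-first search over an adjacency map captures the idea behind Neo4j-style "everything within N hops" queries. This is an illustrative sketch in plain Python, not Neo4j's API, and the ownership graph below is invented for the example.

```python
# Illustrative sketch: breadth-first traversal over an adjacency map,
# the basic idea behind "find everything within N hops" graph queries.
from collections import deque

def within_hops(adjacency, start, max_hops):
    """Return all nodes reachable from `start` in at most `max_hops` edges."""
    seen = {start}
    frontier = deque([(start, 0)])
    reachable = set()
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # don't expand past the hop limit
        for neighbour in adjacency.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                reachable.add(neighbour)
                frontier.append((neighbour, depth + 1))
    return reachable

# Hypothetical ownership graph.
adjacency = {
    "AcmeCorp": ["AcmeHoldings", "Globex"],
    "Globex": ["GlobexGroup"],
}
print(within_hops(adjacency, "AcmeCorp", 1))  # {'AcmeHoldings', 'Globex'}
print(within_hops(adjacency, "AcmeCorp", 2))  # adds 'GlobexGroup'
```

Being able to explain why a dedicated graph database beats this naive in-memory approach at hundreds of millions of edges (index-free adjacency, traversal cost proportional to results rather than total data) is exactly the kind of depth interviewers look for.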
✨Emphasise Your Full Data Lifecycle Experience
This position requires someone who enjoys end-to-end data problem-solving. Be prepared to discuss your experience across the entire data lifecycle, from ingestion to delivery, and how you can contribute to shaping the data platform's evolution.