At a Glance
- Tasks: Own and scale our data platform, ensuring high-quality analytics and a self-serve data culture.
- Company: Join a leading consulting firm that empowers decision-making through data.
- Benefits: Enjoy competitive salary, health insurance, remote work flexibility, and generous annual leave.
- Why this job: Make a real impact on business decisions with cutting-edge data tools and AI innovation.
- Qualifications: Strong SQL skills, experience with dbt, and a passion for data quality.
- Other info: Be part of an early data team with room to innovate and grow.
The predicted salary is between £50,000 and £70,000 per year.
Every day, somewhere in the world, important decisions are made. Whether it is a private equity company deciding to invest millions into a business or a large corporation implementing a new strategic direction, these decisions impact employees, customers, and other stakeholders. Consulting and private equity firms come to proSapient when they need to discover knowledge that helps them make great decisions and succeed in their goals. It is our mission to support them in that discovery.
We help our clients find industry experts who can provide their knowledge via interview or survey; we curate this knowledge in a market-leading software platform; and we help clients surface the knowledge they already have through expansive knowledge management.
We’re looking for an Analytics Engineer to take ownership of our data platform within our small, high-impact Data & Analytics team. You’ll work closely with the Head of Data & Analytics and a Commercial Data Analyst, playing a key role across the full data lifecycle – from ingestion and modelling through to self-serve analytics and stakeholder delivery. This is a hands-on, high-ownership role. You’ll own our dbt environment and warehouse, shape how data is modelled and used across the business, and help move us towards a scalable, self-serve analytics culture. You’ll also have the opportunity to explore how AI and modern tooling can improve both internal workflows and how stakeholders interact with data.
Our data stack:
- PostgreSQL data warehouse (AWS-hosted)
- AWS DMS & Fivetran (data ingestion)
- dbt Cloud (Enterprise)
- Tableau Server
Key duties in this role will include:
- Own and scale the data platform:
  - Own the day-to-day health and performance of our dbt Cloud environment and PostgreSQL warehouse
  - Design, build, and maintain scalable, analytics-ready data models using best practices (testing, version control, CI/CD)
  - Improve data reliability, performance, and structure as we scale
- Drive data quality and governance across the business:
  - Own and develop our data dictionary, making it a genuinely useful tool for the business
  - Define and standardise key metrics, ensuring a clear single source of truth
  - Enable non-technical teams to confidently use data through clear documentation and guidance
- Work closely with Product, Client Services, and Finance to deliver high-quality datasets and dashboards:
  - Translate business questions into well-structured data models and metrics
  - Replace manual, Excel-heavy reporting with scalable, automated pipelines
  - Support product analytics by improving tracking, modelling, and insight generation
- Collaborate with Engineering to enhance data pipelines and architecture:
  - Continuously improve how we build, test, and deploy data models
  - Explore and apply AI tools and automation to improve productivity and the analytics experience
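By way of illustration, the testing and data-dictionary duties above are the kind of thing dbt expresses declaratively in a `schema.yml`. The model and column names below are hypothetical, purely to sketch the idea:

```yaml
# models/marts/schema.yml — hypothetical example; model/column names are illustrative
version: 2

models:
  - name: fct_client_projects
    description: "One row per client project, built from ingested CRM data."
    columns:
      - name: project_id
        description: "Surrogate key for the project."
        tests:          # dbt's built-in generic tests
          - unique
          - not_null
      - name: client_name
        description: "Client the project belongs to."
        tests:
          - not_null
```

In dbt Cloud, `dbt test` runs these checks on every build, and the `description` fields feed the auto-generated docs site – one way a data dictionary can live alongside the code rather than in a separate spreadsheet.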
Why this role?
- High ownership: you’ll shape the data platform, not just contribute to it
- Real impact: your work will directly influence decision-making across the business
- Early data team: opportunity to define standards, tooling, and best practices
- Modern stack: dbt, cloud warehouse, and a strong foundation to build on
- Room to innovate: especially around AI and automation
- Strong SQL skills and experience working with dbt
- Experience building and maintaining analytics-ready data models in a warehouse environment
- Experience with a BI tool (Tableau or similar)
- A strong focus on data quality, testing, and reliability
- Experience owning or contributing to data infrastructure and tooling
- Clear, confident communication with both technical and non-technical audiences
- Self-motivated and accountable: you take ownership and follow through
- Tenure Gifts – vouchers, extra holiday and sabbaticals for each year of employment
- Health insurance through Vitality
- Remote flexibility – work remotely for up to 20 days each year, allowing you to tailor your work environment to your needs and embrace a change of scenery
- Employee Assistance Programme – access to a health and wellbeing service that offers personalised advice and support from specialist teams
- Enhanced maternity & paternity pay
- Annual leave – 25 days +
Analytics Engineer (Data Platform) in London employer: proSapient
Contact Detail:
proSapient Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Analytics Engineer (Data Platform) role in London
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with potential colleagues on LinkedIn. You never know who might have the inside scoop on job openings or can put in a good word for you.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data models, analytics projects, or any relevant work. This gives you a chance to demonstrate your expertise and makes you stand out from the crowd.
✨Tip Number 3
Prepare for interviews by practising common questions related to data engineering and analytics. Think about how you can relate your past experiences to the role at proSapient, especially around data quality and governance.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in joining our team.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the skills and experiences that align with the Analytics Engineer role. Highlight your SQL expertise, experience with dbt, and any work you've done with data models or BI tools like Tableau.
Craft a Compelling Cover Letter: Use your cover letter to tell us why you're passionate about data and how you can contribute to our mission. Share specific examples of how you've driven data quality and governance in previous roles.
Showcase Your Projects: If you've worked on relevant projects, whether in a professional setting or as personal endeavours, include them in your application. We love seeing how you've tackled challenges and implemented solutions in the data space.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it shows us you're keen on joining our team!
How to prepare for a job interview at proSapient
✨Know Your Data Stack
Familiarise yourself with the specific tools mentioned in the job description, like PostgreSQL, dbt Cloud, and Tableau. Be ready to discuss how you've used these technologies in past projects and how they can be applied to improve data reliability and performance.
✨Showcase Your Problem-Solving Skills
Prepare examples of how you've tackled complex data challenges in previous roles. Highlight your ability to translate business questions into structured data models and metrics, as this will demonstrate your analytical thinking and ownership of the data lifecycle.
✨Communicate Clearly
Practise explaining technical concepts in a way that non-technical stakeholders can understand. This is crucial for the role, as you'll need to partner with various teams. Use simple language and relatable examples to convey your points during the interview.
✨Emphasise Your Innovation Mindset
Be prepared to discuss how you can leverage AI and modern tooling to enhance workflows and stakeholder interactions. Share any innovative ideas or experiences you've had in improving data processes, as this aligns with the company's goal of fostering a self-serve data culture.