At a Glance
- Tasks: Build and maintain data pipelines, develop predictive models, and optimise marketing strategies.
- Company: Join Havas Media Network, a leading digital marketing agency with a collaborative culture.
- Benefits: Enjoy a competitive salary, flexible working options, and opportunities for professional growth.
- Why this job: Make a meaningful impact by transforming data into actionable insights for top brands.
- Qualifications: Proficiency in Python, SQL, and experience with cloud platforms required.
- Other info: Be part of a dynamic team that values innovation and continuous improvement.
The predicted salary is between £36,000 and £60,000 per year.
The Analyst Expert is responsible for placing data at the heart of our operations. They conduct cross-analyses of complex data to monitor and optimise the performance of clients' marketing strategies.
This role will be part of Havas Market, our performance-focused digital marketing agency. Our values shape the way we work and define what we expect from our people:
- Human at Heart: You will respect, empower, and support others, fostering an inclusive workplace and creating meaningful experiences.
- Head for Rigour: You will take pride in delivering high-quality, outcome-focused work and continually strive for improvement.
- Mind for Flair: You will embrace diversity and bold thinking to innovate and craft brilliant, unique solutions.
The Role: In this position, you will play a vital role in delivering a wide variety of projects for our clients and internal teams. You’ll be responsible for creating solutions to a range of problems – from bringing data together from multiple sources into centralised datasets, to building predictive models to drive optimisation of our clients’ digital marketing.
Key Responsibilities:
- Build and maintain data pipelines to integrate marketing platform APIs (Google Ads, Meta, TikTok, etc.) with cloud data warehouses, including custom API development where platform connectors are unavailable.
- Develop and optimise SQL queries and data transformations in BigQuery and AWS to aggregate campaign performance data, customer behaviour metrics, and attribution models for reporting and analysis.
- Design and implement data models that combine first-party customer data with marketing performance data to enable cross-channel analysis and audience segmentation.
- Deploy containerised data solutions using Docker and Cloud Run, ensuring pipelines run reliably at scale with appropriate error handling and monitoring.
- Implement statistical techniques such as time series forecasting, propensity modelling, or multi-touch attribution to build predictive models for client campaign optimisation.
- Develop, test, and deploy machine learning models into production environments with MLOps best practices including versioning, monitoring, and automated retraining workflows.
- Translate client briefs and business stakeholder requirements into detailed technical specifications, delivery plans, and accurate time estimates.
- Configure and maintain CI/CD pipelines in Azure DevOps to automate testing, deployment, and infrastructure provisioning for data and ML projects.
- Create clear technical documentation including architecture diagrams, data dictionaries, and implementation guides to enable team knowledge sharing and project handovers.
- Participate actively in code reviews, providing constructive feedback on SQL queries, Python code, and infrastructure configurations to maintain team code quality standards.
- Provide technical consultation to clients on topics such as data architecture design, measurement strategy, and the feasibility of proposed ML applications.
- Support Analytics and Business Intelligence teams by creating reusable data assets, troubleshooting data quality issues, and building datasets that enable self-service reporting.
- Train and mentor junior team members through pair programming, code review feedback, and guided project work on data engineering and ML workflows.
- Implement workflow orchestration using tools like Kubeflow to coordinate complex multi-step data pipelines with appropriate dependency management and retry logic.
- Stay current with developments in cloud data platforms, digital marketing measurement, and ML techniques relevant to performance marketing optimisation.
- Identify and implement improvements to team infrastructure, development workflows, and data quality processes.
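One of the responsibilities above is building predictive models such as multi-touch attribution. As a rough illustration only, here is a minimal linear-attribution sketch in Python; the channel names, conversion paths, and values are entirely hypothetical and not drawn from the role itself:

```python
from collections import defaultdict

def linear_attribution(conversions):
    """Split each conversion's value evenly across its touchpoints,
    one simple form of multi-touch attribution."""
    credit = defaultdict(float)
    for touchpoints, value in conversions:
        if not touchpoints:
            continue  # skip conversions with no recorded touchpoints
        share = value / len(touchpoints)
        for channel in touchpoints:
            credit[channel] += share
    return dict(credit)

# Hypothetical conversion paths: (ordered touchpoints, conversion value)
paths = [
    (["google_ads", "meta"], 100.0),
    (["tiktok"], 50.0),
    (["meta", "google_ads", "meta"], 90.0),
]
result = linear_attribution(paths)
print(result)
```

In practice this logic would typically be expressed as a SQL transformation over warehouse tables, or replaced by a data-driven attribution model, but the even-split rule above is the usual baseline.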
Core Skills and Experience We Are Looking For:
- Expert-level proficiency in Python for building robust APIs, scripting, and maintaining complex data/ML codebases.
- Strong SQL expertise and deep familiarity with data warehousing concepts relevant to tools like BigQuery.
- Practical experience with Docker and a firm grasp of Linux to manage local devcontainers, servers, and Cloud Run deployments.
- Advanced Git proficiency and active experience participating in PR reviews to maintain code quality.
- Solid understanding of CI/CD principles and practical experience defining or managing pipelines, preferably using a tool like Azure DevOps.
- Proven ability to quickly read, understand, and apply technical documentation to translate broad business requirements into precise technical specifications.
- Excellent written and verbal communication skills for proactive knowledge sharing, constructive PR feedback, participating in daily standups, and documenting processes.
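The SQL and data-warehousing work described above can be illustrated with a small, self-contained sketch. Here sqlite3 stands in for BigQuery, and the table, columns, and figures are hypothetical examples of aggregating campaign performance data:

```python
import sqlite3

# In-memory stand-in for a cloud data warehouse such as BigQuery.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE campaign_events (
        campaign TEXT, clicks INTEGER, cost REAL, conversions INTEGER
    )
""")
conn.executemany(
    "INSERT INTO campaign_events VALUES (?, ?, ?, ?)",
    [
        ("search_brand", 120, 60.0, 12),
        ("search_brand", 80, 40.0, 8),
        ("social_prospecting", 200, 150.0, 5),
    ],
)
# Aggregate per-campaign totals and derive cost per conversion.
rows = conn.execute("""
    SELECT campaign,
           SUM(clicks) AS clicks,
           SUM(cost)   AS cost,
           SUM(cost) * 1.0 / SUM(conversions) AS cost_per_conversion
    FROM campaign_events
    GROUP BY campaign
    ORDER BY campaign
""").fetchall()
for row in rows:
    print(row)
```

The same GROUP BY pattern carries over to BigQuery almost verbatim, though warehouse-specific features (partitioned tables, window functions for attribution) usually enter quickly in real pipelines.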
Beneficial skills and experience to have:
- Hands-on experience with any major cloud ML platform, focusing on MLOps workflow patterns.
- Practical experience with stream or batch processing tools such as GCP Dataflow, or the Apache Beam programming model that underpins it.
- Familiarity with Python ML frameworks or data modeling tools like Dataform/dbt.
- Familiarity with the structure and core offerings of GCP or AWS.
Contract Type: Permanent. Across the Havas group we pride ourselves on our commitment to offering equal opportunities to all potential employees and have zero tolerance for discrimination. We are an equal opportunity employer and welcome applicants irrespective of age, sex, race, ethnicity, disability, or any other factor that has no bearing on an individual's ability to perform their job.
Data Engineer (Data Science) in City of London. Employer: Havas SA
Contact Detail:
Havas SA Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer (Data Science) role in City of London
✨Tip Number 1
Network like a pro! Reach out to people in the industry, attend meetups, and connect with potential colleagues on LinkedIn. You never know who might have the inside scoop on job openings or can put in a good word for you.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those related to data engineering and machine learning. This gives you a chance to demonstrate your expertise and makes you stand out from the crowd.
✨Tip Number 3
Prepare for interviews by practising common questions and scenarios relevant to data engineering. Think about how you can relate your past experiences to the role at Havas Market, and don’t forget to ask insightful questions about their projects!
✨Tip Number 4
Apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in joining our team at Havas Market.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV speaks directly to the role of Data Engineer. Highlight your experience with Python, SQL, and any cloud platforms you've worked with. We want to see how your skills align with our needs!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Share your passion for data engineering and how you can contribute to our mission at Havas. Be sure to mention specific projects or experiences that showcase your expertise.
Showcase Your Technical Skills: Don’t just list your skills; demonstrate them! Include examples of how you've built data pipelines, optimised queries, or implemented machine learning models. We love seeing real-world applications of your knowledge.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way to ensure your application gets into the right hands. Plus, it shows us you're serious about joining our team!
How to prepare for a job interview at Havas SA
✨Know Your Data Inside Out
Make sure you’re well-versed in the data engineering concepts relevant to the role. Brush up on your SQL skills and be ready to discuss how you’ve built and maintained data pipelines in the past. Being able to talk about specific projects where you’ve integrated APIs or optimised data transformations will really impress.
✨Showcase Your Problem-Solving Skills
Prepare to discuss how you approach complex data problems. Think of examples where you’ve designed predictive models or implemented statistical techniques. Be ready to explain your thought process and the impact your solutions had on previous projects.
✨Demonstrate Collaboration and Communication
Since this role involves working closely with teams, highlight your experience in collaborative environments. Share examples of how you’ve provided technical consultation or mentored junior team members. Good communication is key, so practice explaining technical concepts in a way that’s easy to understand.
✨Stay Current with Industry Trends
Familiarise yourself with the latest developments in cloud data platforms and ML techniques. Be prepared to discuss how these trends can impact performance marketing optimisation. Showing that you’re proactive about learning will set you apart from other candidates.