Junior Data Engineer (Data Science) in Leeds

Leeds · Full-Time · £28,800–£48,000 / year (est.) · No home office possible

At a Glance

  • Tasks: Build and maintain data pipelines, develop SQL queries, and support machine learning models.
  • Company: Join Havas Media Network, a leading digital marketing agency with a collaborative culture.
  • Benefits: Enjoy a permanent role with equal opportunities and a supportive work environment.
  • Why this job: Make a meaningful impact on client campaigns while developing your data engineering skills.
  • Qualifications: Solid Python and SQL skills, with a passion for data and problem-solving.
  • Other info: Dynamic team atmosphere with opportunities for growth and learning.

The predicted salary is between £28,800 and £48,000 per year.

Reporting To: Head of Data Science

Hiring Manager: Head of Data Science

Office Location: BlokHaus West Park, Ring Rd, Leeds LS16 6QG

About Us – Havas Media Network

Havas Media Network (HMN) employs over 900 people in the UK & Ireland. We are passionate about helping our clients build more Meaningful Brands by creating and delivering more valuable experiences.

This role will be part of Havas Market, our performance-focused digital marketing agency.

Our values shape the way we work and define what we expect from our people:

  • Human at Heart: You will respect, empower, and support others, fostering an inclusive workplace and creating meaningful experiences.
  • Head for Rigour: You will take pride in delivering high-quality, outcome-focused work and continually strive for improvement.
  • Mind for Flair: You will embrace diversity and bold thinking to innovate and craft brilliant, unique solutions.

The Role

In this position, you will play a vital role in delivering a wide variety of projects for our clients and internal teams. You will be working as part of a team to create solutions to a range of problems – from bringing data together from multiple sources into centralised datasets, to building predictive models to drive optimisation of our clients’ digital marketing.

We are a small, highly collaborative team, and we value cloud-agnostic technical fundamentals and self-sufficiency above specific platform expertise. The following requirements reflect the skills needed to contribute immediately and integrate smoothly with our existing workflow.

Key Responsibilities

  • Assist with building and maintaining data pipelines that ingest data from external APIs into cloud data warehouses, developing custom integrations where pre-built connectors don’t exist.
  • Develop and optimise SQL queries and data transformations in BigQuery and on AWS to aggregate, join, blend, clean and de-dupe data for modelling, reporting and analysis.
  • Work with senior members of the team to design and implement data models on the datasets created above for in-depth analysis and segmentation.
  • Support the deployment of containerized data solutions using Docker and Cloud Run, ensuring pipelines run reliably with appropriate error handling and monitoring.
  • Assist with configuring and maintaining CI/CD pipelines in Azure DevOps to automate testing, deployment, and infrastructure provisioning for data and ML projects.
  • Create clear technical documentation including architecture diagrams, data dictionaries, and implementation guides to enable team knowledge sharing and project handovers.
  • Participate actively in code reviews, providing constructive feedback on SQL queries, Python code, and infrastructure configurations to maintain team code quality standards.
  • Support Analytics and Business Intelligence teams by creating reusable data assets, troubleshooting data quality issues, and building datasets that enable self-service reporting.
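To give a flavour of the ingest-and-clean step described in the first two bullets, here is a minimal Python sketch. The field names (`id`, `clicks`, `timestamp`) are purely illustrative, not a real client schema, and a production pipeline would pull from a live API and load into a warehouse such as BigQuery rather than work on in-memory lists:

```python
from datetime import datetime, timezone

def clean_records(raw_records):
    """De-dupe by record id (keeping the latest version), drop rows missing
    required fields, and normalise timestamps to UTC ISO-8601 strings.
    Field names here are illustrative, not a real client schema."""
    latest = {}
    for rec in raw_records:
        if not rec.get("id") or "clicks" not in rec:
            continue  # skip incomplete rows rather than loading bad data
        ts = datetime.fromisoformat(rec["timestamp"]).astimezone(timezone.utc)
        rec = {**rec, "timestamp": ts.isoformat()}
        # UTC-normalised ISO strings compare correctly lexically,
        # so string comparison picks the most recent version of each id
        if rec["id"] not in latest or rec["timestamp"] > latest[rec["id"]]["timestamp"]:
            latest[rec["id"]] = rec
    return sorted(latest.values(), key=lambda r: r["id"])

raw = [
    {"id": "a1", "clicks": 10, "timestamp": "2024-01-01T09:00:00+00:00"},
    {"id": "a1", "clicks": 12, "timestamp": "2024-01-01T10:00:00+00:00"},  # later duplicate
    {"id": "b2", "timestamp": "2024-01-01T09:30:00+00:00"},  # missing clicks: dropped
]
cleaned = clean_records(raw)
print(cleaned)
```

In practice the same de-duplication would usually be pushed into SQL (e.g. a `ROW_NUMBER()` window over the id, ordered by timestamp) so it runs inside the warehouse rather than in application code.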

Additional Responsibilities

  • Support implementation of statistical techniques such as time series forecasting, propensity modelling, or multi-touch attribution to build predictive models for client campaign optimisation.
  • Assist with the implementation of machine learning models into production environments with MLOps best practices including versioning, monitoring, and automated retraining workflows.
  • Participate in scoping sessions to translate client briefs and business stakeholder requirements into detailed technical specifications, delivery plans, and accurate time estimates.
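As a flavour of the time series forecasting mentioned above, here is a deliberately simplified sketch of simple exponential smoothing in plain Python. A production model would use an established library with proper parameter fitting and out-of-sample validation; this only illustrates the core idea:

```python
def ses_forecast(series, alpha=0.5):
    """Simple exponential smoothing: each smoothed level blends the newest
    observation with the previous level. Returns the one-step-ahead forecast,
    i.e. the final smoothed level. alpha in (0, 1] controls how quickly the
    model reacts to recent data (higher alpha = more reactive)."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Hypothetical daily media spend figures, for illustration only
daily_spend = [100.0, 120.0, 110.0, 130.0]
print(ses_forecast(daily_spend, alpha=0.5))
```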

Core Skills and Experience We Are Looking For

  • Solid foundational Python skills — comfortable writing scripts, working with APIs, and structuring readable code.
  • Working knowledge of SQL — able to write queries involving joins, aggregations, and filtering.
  • Practical experience with Docker — able to build and run containers locally.
  • Familiar with Git workflows — comfortable with branching, committing, and raising pull requests.
  • Understanding of CI/CD principles — aware of how automated testing and deployment pipelines work.
  • Ability to read and follow technical documentation, with a willingness to ask questions and learn how business requirements translate into technical solutions.
  • Excellent written and verbal communication skills for proactive knowledge sharing, constructive PR feedback, participating in daily standups, and documenting processes.
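The level of SQL described in the second bullet (joins, aggregations, filtering) can be practised without any cloud account. A toy example using Python's built-in sqlite3 module, with invented table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE campaigns (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE clicks (campaign_id INTEGER, clicks INTEGER);
    INSERT INTO campaigns VALUES (1, 'search'), (2, 'social');
    INSERT INTO clicks VALUES (1, 10), (1, 15), (2, 7);
""")

# Join the two tables, aggregate clicks per campaign, filter on the
# aggregate, and sort: the basic building blocks listed above.
rows = conn.execute("""
    SELECT c.name, SUM(k.clicks) AS total_clicks
    FROM campaigns c
    JOIN clicks k ON k.campaign_id = c.id
    GROUP BY c.name
    HAVING SUM(k.clicks) > 5
    ORDER BY total_clicks DESC
""").fetchall()
print(rows)  # [('search', 25), ('social', 7)]
```

The same query shape transfers almost unchanged to BigQuery or other warehouse dialects.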

Beneficial skills and experience to have

  • Hands-on experience with any major cloud ML platform, focusing on MLOps workflow patterns.
  • Practical experience with stream or batch processing frameworks such as Apache Beam, or managed services like GCP Dataflow.
  • Familiarity with Python ML frameworks, or with data modelling tools such as Dataform or dbt.
  • Familiarity with the structure and core offerings of GCP or AWS.

Contract Type: Permanent

Across the Havas group, we pride ourselves on our commitment to offering equal opportunities to all potential employees and have zero tolerance for discrimination. We are an equal opportunity employer and welcome applicants irrespective of age, sex, race, ethnicity, disability or any other factor that has no bearing on an individual's ability to perform their job.

Junior Data Engineer (Data Science) in Leeds employer: Havas Group

Havas Media Network is an exceptional employer, offering a vibrant and inclusive work culture that prioritises collaboration and innovation. Located in Leeds, our team thrives on delivering impactful solutions while enjoying opportunities for professional growth and development. With a commitment to meaningful experiences and a diverse environment, we empower our employees to excel and make a difference in the world of digital marketing.

Contact Detail:

Havas Group Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Junior Data Engineer (Data Science) role in Leeds

✨Tip Number 1

Network like a pro! Reach out to current employees at Havas Media Network on LinkedIn. Ask them about their experiences and any tips they might have for landing the Junior Data Engineer role. Personal connections can make a huge difference!

✨Tip Number 2

Prepare for the interview by brushing up on your Python and SQL skills. Be ready to showcase your problem-solving abilities with real-world examples. Practice coding challenges that focus on data pipelines and transformations to impress the hiring team.

✨Tip Number 3

Show off your passion for data! During interviews, share your personal projects or any relevant experience you have with cloud platforms, Docker, or CI/CD principles. This will demonstrate your commitment to continuous learning and improvement.

✨Tip Number 4

Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in joining the Havas team. Good luck!

We think you need these skills to ace the Junior Data Engineer (Data Science) role in Leeds

Python
SQL
BigQuery
AWS
Docker
Cloud Run
CI/CD
Azure DevOps
APIs
Data Modelling
MLOps
Technical Documentation
Communication Skills
Data Quality Troubleshooting
Statistical Techniques

Some tips for your application 🫡

Tailor Your CV: Make sure your CV reflects the skills and experiences that align with the Junior Data Engineer role. Highlight your Python and SQL skills, and any relevant projects you've worked on. We want to see how you can contribute to our team!

Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data engineering and how you embody our values at Havas. Keep it concise but impactful – we love a good story!

Showcase Your Projects: If you've worked on any data-related projects, make sure to mention them! Whether it's building data pipelines or using Docker, we want to see what you've done. Include links to your GitHub or any relevant portfolios if you have them.

Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it’s super easy – just follow the prompts!

How to prepare for a job interview at Havas Group

✨Know Your Tech Basics

Make sure you brush up on your foundational Python skills and SQL knowledge. Be ready to discuss how you've used these in past projects, especially when it comes to writing scripts or working with APIs. This will show that you can hit the ground running!

✨Showcase Your Problem-Solving Skills

Prepare examples of how you've tackled data challenges in the past. Whether it's building data pipelines or optimising queries, having specific instances ready will demonstrate your ability to contribute to the team’s goals effectively.

✨Emphasise Collaboration

Since this role is all about teamwork, be prepared to talk about your experiences working in collaborative environments. Highlight any instances where you’ve participated in code reviews or shared knowledge with colleagues, as this aligns with their values.

✨Ask Insightful Questions

At the end of the interview, don’t forget to ask questions! Inquire about the team’s current projects or the tools they use for CI/CD. This shows your genuine interest in the role and helps you understand how you can fit into their workflow.
