At a Glance
- Tasks: Build and maintain data pipelines, ensuring reliable and accessible data across the business.
- Company: Join Pubity Group as the first dedicated Data Engineer in a fast-paced environment.
- Benefits: Competitive salary, hands-on experience, and the chance to shape data infrastructure.
- Other info: Dynamic role with opportunities for growth and collaboration across teams.
- Why this job: Take ownership early in your career and make a real impact on data systems.
- Qualifications: Experience with SQL, Python, and building production data pipelines.
The predicted salary is between £40,000 and £50,000 per year.
London – 4 Days in Office
Note: We are unable to provide Visa Sponsorship for this role.
Data at Pubity moves fast. We’re looking for an engineer who wants to take real ownership early in their career and become the first dedicated data hire at the company. You will help build the foundations of how data works across the business. That means creating reliable pipelines, structuring messy inputs, and building systems the team can trust to make decisions.
About the Role
As our first Data Engineer, you will help shape Pubity Group’s data function from the ground up. This is a hands-on role where you will design and build the first generation of our data pipelines and models. You will work closely with the Director of Special Projects and teams across Social, Commercial and Studio to ensure data is reliable, accessible and useful across the business.
You won’t be expected to arrive with a fully built playbook. Instead, we want someone capable, curious and ambitious who wants to take ownership of the data layer and grow into the role as the function scales.
Key Responsibilities
- Take ownership of the company’s early-stage data infrastructure and help shape how it evolves.
- Build and maintain data pipelines and models in GCP using BigQuery, SQL and Python.
- Develop reliable Python pipeline code using tools such as requests and pandas.
- Build and maintain API integrations across platforms including Meta, TikTok, YouTube and X.
- Expand pipelines into internal systems such as CRM platforms, project management tools and Slack.
- Implement data validation, monitoring and quality checks to ensure reliability.
- Help establish metric definitions and consistent reporting standards across platforms.
- Schedule and orchestrate data workflows using GCP tools such as Cloud Composer.
- Work with teams across Social, Editorial, Commercial and Studio to deliver reporting and dashboards.
- Document pipelines, definitions and processes so the data function can scale properly over time.
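To make the day-to-day of these responsibilities concrete, here is a minimal sketch of the kind of ingest-transform-validate step the role involves. The endpoint URL, field names and checks are hypothetical placeholders, not from any actual Pubity codebase; a real pipeline would load the validated frame into BigQuery and be scheduled via Cloud Composer.

```python
"""Illustrative ingestion step: fetch platform metrics, normalise them
with pandas, and run basic quality checks before any warehouse load.
All endpoint and column names here are hypothetical."""
import pandas as pd


def fetch_metrics(url: str, token: str) -> list[dict]:
    """Fetch raw metric records from a (hypothetical) platform API.

    `requests` is imported lazily so the rest of the sketch runs
    without a network dependency.
    """
    import requests

    resp = requests.get(
        url, headers={"Authorization": f"Bearer {token}"}, timeout=30
    )
    resp.raise_for_status()
    return resp.json()["data"]


def normalise(records: list[dict]) -> pd.DataFrame:
    """Flatten raw records into a tidy frame with typed columns."""
    df = pd.DataFrame(records)
    df["date"] = pd.to_datetime(df["date"])
    df["views"] = df["views"].astype("int64")
    return df


def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Basic quality gates: unique keys, no negative counts."""
    if df.duplicated(subset=["post_id", "date"]).any():
        raise ValueError("duplicate post_id/date rows")
    if (df["views"] < 0).any():
        raise ValueError("negative view counts")
    return df
```

In practice each platform (Meta, TikTok, YouTube, X) would get its own fetch-and-normalise pair feeding a shared validation and load layer, which is what keeps metric definitions consistent across sources.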
What We’re Looking For
Must Haves
- Experience building and maintaining production data pipelines.
- Strong SQL skills and experience with BigQuery or a similar warehouse.
- Strong Python skills for data pipelines and API integrations.
- Experience working with APIs and ingesting external platform data.
- Familiarity with modern data modelling tools such as dbt or Dataform.
- A mindset focused on reliability, monitoring and good engineering practices.
Important
- Experience with orchestration tools such as Cloud Composer, Airflow or similar.
- Experience connecting data models to BI platforms such as Power BI, Looker or similar.
- Ability to work with stakeholders and prioritise requests across teams.
Nice to Have
- Experience optimising queries and managing warehouse costs.
- Postgres or Supabase experience.
- Node.js scripting for integrations.
- Exposure to multiple cloud environments such as AWS or Azure.
Platforms and Pipeline Scope
You will help build and scale pipelines across:
- Meta (Facebook and Instagram), which already has integrations.
- Upcoming platforms, including TikTok, YouTube, X, Snap, LinkedIn and Google Ads.
- Internal systems such as CRM platforms, project management tools, meeting trackers and Slack.
Much of this infrastructure will be built for the first time, and you will play a key role in defining how it works.
You’ll Thrive Here If
- You enjoy building things from scratch.
- You want ownership and impact early in your career.
- You like solving messy data problems and making systems reliable.
- You work well with non-technical teams and can translate data into useful outputs.
Data Engineer (Mid Level), London. Employer: Pubity Group
Contact Detail:
Pubity Group Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer (Mid Level) role in London
✨Tip Number 1
Network like a pro! Reach out to people in the industry, attend meetups, and connect with potential colleagues on LinkedIn. The more people you know, the better your chances of landing that Data Engineer role.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data pipelines, models, and any projects you've worked on. This will give you an edge and demonstrate your hands-on experience to potential employers.
✨Tip Number 3
Prepare for interviews by brushing up on your SQL and Python skills. Be ready to discuss your past projects and how you tackled challenges. Practice common interview questions related to data engineering to boost your confidence.
✨Tip Number 4
Apply through our website! We love seeing candidates who are genuinely interested in joining us. Tailor your application to highlight your ownership mindset and eagerness to build data systems from the ground up.
Some tips for your application 🫡
Show Your Passion for Data: When writing your application, let us see your enthusiasm for data engineering! Share specific examples of projects you've worked on or challenges you've tackled. We love to see candidates who are genuinely excited about building data systems from the ground up.
Tailor Your CV and Cover Letter: Make sure your CV and cover letter are tailored to the role. Highlight your experience with SQL, Python, and any relevant tools like BigQuery or Cloud Composer. We want to see how your skills align with what we're looking for, so don’t hold back!
Be Clear and Concise: Keep your application clear and to the point. Use bullet points where possible to make it easy for us to read through your experience and skills. We appreciate a well-structured application that gets straight to the good stuff!
Apply Through Our Website: Don’t forget to apply through our website! It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it shows you’re serious about joining our team at Pubity!
How to prepare for a job interview at Pubity Group
✨Know Your Tech Stack
Make sure you’re well-versed in the technologies mentioned in the job description, like SQL, Python, and GCP. Brush up on your experience with BigQuery and any orchestration tools like Cloud Composer. Being able to discuss specific projects where you've used these technologies will show that you're ready to take ownership of the data infrastructure.
✨Showcase Your Problem-Solving Skills
Prepare examples of how you've tackled messy data problems in the past. Think about situations where you built reliable pipelines or improved data quality. This will demonstrate your ability to handle the challenges that come with being the first dedicated data hire at a company.
✨Understand the Business Context
Research Pubity Group and understand their business model and how data plays a role in their operations. Be ready to discuss how you can contribute to their goals by building scalable data systems that support various teams. This shows that you’re not just a techie but also someone who understands the bigger picture.
✨Ask Insightful Questions
Prepare thoughtful questions for your interviewers about the company's data strategy and future plans. Inquire about the challenges they face with their current data systems and how you can help solve them. This not only shows your interest in the role but also your proactive approach to contributing from day one.