At a Glance
- Tasks: Build and maintain data pipelines, ensuring reliable data across the business.
- Company: Join Pubity, a fast-paced company where you can shape the data function.
- Benefits: Competitive salary, hands-on experience, and opportunities for growth.
- Why this job: Be the first dedicated data engineer and make a real impact early in your career.
- Qualifications: Experience with SQL, Python, and building production data pipelines.
- Other info: Collaborative environment with a focus on innovation and problem-solving.
The predicted salary is between £36,000 and £60,000 per year.
London – 4 Days in Office
Note: We are unable to provide Visa Sponsorship for this role.
Data at Pubity moves fast. We are looking for an engineer who wants to take real ownership early in their career and become the first dedicated data hire at the company. You will help build the foundations of how data works across the business. That means creating reliable pipelines, structuring messy inputs, and building systems the team can trust to make decisions.
If you enjoy building things from the ground up, working closely with stakeholders, and developing scalable data systems as you grow into the role, this is a rare opportunity to do exactly that.
About the Role
As our first Data Engineer, you will help shape Pubity Group's data function from the ground up. This is a hands-on role where you will design and build the first generation of our data pipelines and models. You will work closely with the Director of Special Projects and teams across Social, Commercial and Studio to ensure data is reliable, accessible and useful across the business.
You won't be expected to arrive with a fully built playbook. Instead, we want someone capable, curious and ambitious who wants to take ownership of the data layer and grow into the role as the function scales.
Key Responsibilities
- Take ownership of the company's early-stage data infrastructure and help shape how it evolves.
- Build and maintain data pipelines and models in GCP using BigQuery, SQL and Python.
- Develop reliable Python pipeline code using tools such as requests and pandas.
- Build and maintain API integrations across platforms including Meta, TikTok, YouTube and X.
- Expand pipelines into internal systems such as CRM platforms, project management tools and Slack.
- Implement data validation, monitoring and quality checks to ensure reliability.
- Help establish metric definitions and consistent reporting standards across platforms.
- Schedule and orchestrate data workflows using GCP tools such as Cloud Composer.
- Work with teams across Social, Editorial, Commercial and Studio to deliver reporting and dashboards.
- Document pipelines, definitions and processes so the data function can scale properly over time.
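To give a flavour of the responsibilities above, here is a minimal sketch of the kind of ingestion-and-validation code the role involves, using requests and pandas as the listing mentions. The endpoint, response shape, and column names are hypothetical illustrations, not Pubity's actual systems.

```python
import pandas as pd
import requests


def fetch_post_metrics(api_url: str, token: str) -> pd.DataFrame:
    """Pull post-level metrics from a platform API into a DataFrame.

    The URL and the {"data": [...]} response shape are assumptions
    for illustration; each real platform (Meta, TikTok, etc.) has
    its own endpoints and pagination rules.
    """
    resp = requests.get(
        api_url,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return pd.DataFrame(resp.json()["data"])


def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Run basic quality checks before loading rows to the warehouse."""
    required = {"post_id", "impressions", "date"}
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {missing}")
    if df["post_id"].duplicated().any():
        raise ValueError("duplicate post_id rows")
    if (df["impressions"] < 0).any():
        raise ValueError("negative impressions")
    return df
```

In practice, checks like these would run as a step in an orchestrated workflow (Cloud Composer or similar) so that bad data fails loudly before it reaches BigQuery and the dashboards built on it.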
What We're Looking For
Must Haves
- Experience building and maintaining production data pipelines.
- Strong SQL skills and experience with BigQuery or a similar warehouse.
- Strong Python skills for data pipelines and API integrations.
- Experience working with APIs and ingesting external platform data.
- Familiarity with modern data modelling tools such as dbt or Dataform.
- A mindset focused on reliability, monitoring and good engineering practices.
Important
- Experience with orchestration tools such as Cloud Composer, Airflow or similar.
- Experience connecting data models to BI platforms such as Power BI, Looker or similar.
- Ability to work with stakeholders and prioritise requests across teams.
Nice to Have
- Experience optimising queries and managing warehouse costs.
- Postgres or Supabase experience.
- Node.js scripting for integrations.
- Exposure to multiple cloud environments such as AWS or Azure.
Platforms and Pipeline Scope
You will help build and scale pipelines across:
- Meta (Facebook and Instagram), which already has integrations.
- Next platforms including TikTok, YouTube, X, Snap, LinkedIn and Google Ads.
- Internal systems such as CRM platforms, project management tools, meeting trackers and Slack.
Much of this infrastructure will be built for the first time, and you will play a key role in defining how it works.
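As one small example of the internal-systems work described above, a pipeline might push data-quality alerts into Slack via an incoming webhook. The pipeline name and check messages below are hypothetical; Slack incoming webhooks do accept a simple JSON `{"text": ...}` payload.

```python
import json

import requests


def build_alert(pipeline: str, failed_checks: list[str]) -> dict:
    """Format failed data-quality checks as a Slack message payload."""
    lines = "\n".join(f"- {check}" for check in failed_checks)
    return {"text": f"Pipeline `{pipeline}` failed checks:\n{lines}"}


def send_alert(webhook_url: str, payload: dict) -> None:
    """Post the alert to a Slack incoming webhook.

    The webhook URL is configured per workspace in Slack's app settings.
    """
    resp = requests.post(
        webhook_url,
        data=json.dumps(payload),
        headers={"Content-Type": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
```

Keeping the payload builder separate from the network call makes the formatting logic easy to test without hitting Slack.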
You'll Thrive Here If
- You enjoy building things from scratch.
- You want ownership and impact early in your career.
- You like solving messy data problems and making systems reliable.
- You work well with non-technical teams and can translate data into useful outputs.
Data Engineer (Mid Level) in London
Employer: Pubitygroup
Contact: Pubitygroup Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer (Mid Level) role in London
✨Tip Number 1
Network like a pro! Reach out to people in the industry, attend meetups, and connect with potential colleagues on LinkedIn. The more you engage, the better your chances of landing that Data Engineer role.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data pipelines, models, and any projects you've worked on. This will give you an edge and demonstrate your hands-on experience to potential employers.
✨Tip Number 3
Prepare for interviews by brushing up on your SQL and Python skills. Be ready to discuss your past projects and how you tackled challenges. Practice common interview questions related to data engineering to boost your confidence.
✨Tip Number 4
Apply through our website! We love seeing candidates who are genuinely interested in joining us. Tailor your application to highlight your ownership mindset and problem-solving skills, and let’s build something amazing together!
Some tips for your application 🫡
Show Your Passion for Data: When writing your application, let us see your enthusiasm for data engineering! Share specific examples of projects you've worked on that demonstrate your skills in building data pipelines and working with APIs. We love seeing candidates who are genuinely excited about shaping data functions.
Tailor Your CV and Cover Letter: Make sure to customise your CV and cover letter to highlight the experiences that align with our job description. Focus on your SQL and Python skills, and any relevant tools you've used like BigQuery or Cloud Composer. This helps us see how you fit into our team right from the start!
Be Clear and Concise: Keep your application clear and to the point. Use bullet points where possible to make it easy for us to read through your qualifications and experiences. We appreciate straightforward communication, especially when it comes to technical skills and achievements.
Apply Through Our Website: Don’t forget to apply through our website! It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it shows us you’re serious about joining our team at Pubity!
How to prepare for a job interview at Pubitygroup
✨Know Your Tech Stack
Make sure you’re well-versed in the technologies mentioned in the job description, like SQL, BigQuery, and Python. Brush up on your knowledge of data pipelines and API integrations, as these will likely be hot topics during the interview.
✨Showcase Your Problem-Solving Skills
Prepare to discuss specific examples where you've tackled messy data problems or built reliable systems. Use the STAR method (Situation, Task, Action, Result) to structure your answers and highlight your impact.
✨Understand the Business Context
Research Pubity Group and understand how data plays a role in their operations. Be ready to discuss how you can contribute to their goals and improve their data infrastructure, showing that you’re not just a techie but also a strategic thinker.
✨Ask Insightful Questions
Prepare thoughtful questions about the team dynamics, the challenges they face with data, and how they envision the data function evolving. This shows your genuine interest in the role and helps you assess if it’s the right fit for you.