Junior Data Engineer - Build Reliable Data Pipelines (Remote) in Westminster

Westminster · Full-Time · £30,000–£40,000 / year (est.) · Fully remote
Smart Communications group

At a Glance

  • Tasks: Build and maintain reliable data pipelines for analytics and reporting.
  • Company: Join a leading tech company focused on meaningful customer conversations.
  • Benefits: Competitive salary, remote work, health insurance, and 25 days holiday plus your birthday off.
  • Other info: Dynamic, flexible work environment with excellent career growth opportunities.
  • Why this job: Gain hands-on experience with modern cloud data technologies and real-world data challenges.
  • Qualifications: 1-3 years in data engineering, solid Python and SQL skills required.

The predicted salary is between £30,000 and £40,000 per year.

We are looking for a Junior Data Engineer who is excited to contribute to a growing Data Operations function within a SaaS business. In this role, you will help build and maintain reliable, high-quality data pipelines that support internal reporting, analytics, and operational needs across the company. Working closely with the Data Engineer and the wider Data Operations team, you will help develop scalable data processes, maintain trusted datasets, and support the smooth operation of our enterprise data platform. You will assist in integrating new internal data sources, improving data quality, and helping to create well-governed data assets that enable accurate reporting, meaningful insights, and future analytical work. This is an excellent opportunity for someone early in their data engineering career who wants hands-on experience with modern cloud data technologies, solid engineering best practices, and real-world data challenges. You will grow your technical skills while contributing to initiatives that help teams across the business make informed, data-driven decisions.

The responsibilities of the role include:

  • Build and maintain reliable data pipelines that load and transform data within our enterprise data lake.
  • Develop and maintain data transformations using PySpark and SQL to support internal reporting and analytics needs.
  • Ingest and integrate data from internal and approved external sources using a variety of integration patterns and APIs.
  • Apply and monitor data quality checks to help ensure accuracy, completeness, and consistency across datasets.
  • Support day-to-day data operations, including job monitoring, incident triage, reruns, backfills, and general platform maintenance.
  • Assist with root-cause analysis and post-incident improvements to enhance pipeline reliability.
  • Collaborate with stakeholders to gather clear data and reporting requirements, document outcomes, and translate needs into well-defined technical tasks.
  • Record and manage work items in JIRA, ensuring tasks are well-organized, up-to-date, and clearly documented.
  • Contribute to data governance activities, including data cataloging, metadata upkeep, documentation, and adherence to security and access standards.
  • Maintain clear technical documentation for data pipelines, workflows, and operational procedures.
  • Work with the Data Engineer and wider Data Operations team to support analytics, reporting initiatives, and internal data consumers.
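To give candidates a concrete feel for the data quality work described above, here is a minimal Python sketch of row-level completeness and consistency checks. The field names (`order_id`, `amount`) and thresholds are hypothetical illustrations, not the company's actual implementation (which would typically run inside a PySpark pipeline at scale):

```python
# Minimal sketch of row-level data quality checks: completeness
# (required fields present and non-null) and consistency (values in
# an expected range). Field names are hypothetical examples.

def run_quality_checks(rows, required_fields=("order_id", "amount")):
    """Return a report of row indices failing completeness or consistency."""
    incomplete, inconsistent = [], []
    for i, row in enumerate(rows):
        # Completeness: every required field must be present and non-null.
        if any(row.get(f) is None for f in required_fields):
            incomplete.append(i)
        # Consistency: amounts, when present, must be non-negative numbers.
        elif not isinstance(row["amount"], (int, float)) or row["amount"] < 0:
            inconsistent.append(i)
    return {"total": len(rows), "incomplete": incomplete, "inconsistent": inconsistent}

sample = [
    {"order_id": "A1", "amount": 19.99},   # passes both checks
    {"order_id": None, "amount": 5.00},    # fails completeness
    {"order_id": "A3", "amount": -2.50},   # fails consistency
]
print(run_quality_checks(sample))
# {'total': 3, 'incomplete': [1], 'inconsistent': [2]}
```

In a production pipeline the same idea is usually expressed as column-level rules applied in bulk, with failing records quarantined rather than silently dropped.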

What we are looking for:

Must have skills and experience:

  • 1–3 years' experience in data engineering, analytics engineering, or a related role.
  • Bachelor's degree (or equivalent experience) in computer science, data & analytics, or a related discipline.
  • Solid knowledge of Python and SQL.
  • Hands-on experience building or maintaining data pipelines.
  • Practical experience with PySpark (DataFrames and Spark SQL).
  • Familiarity with cloud storage concepts (e.g. Amazon S3).
  • Experience working with data tables and basic data modelling concepts.
  • Experience writing technical documentation.
  • Strong organizational and time management skills.
  • Strong problem-solving skills and attention to detail.
  • Eagerness to learn and grow in the field of data engineering.

Advantageous skills/experience:

  • Experience working with Databricks and Delta Lake.
  • Experience integrating data via REST APIs.
  • Exposure to data quality frameworks or validation techniques.
  • Basic understanding of data governance or access control concepts.
  • Experience with AWS-based data platforms.
  • Familiarity with BI and reporting tools such as Power BI, Tableau, or AWS QuickSight.
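The "integrating data via REST APIs" skill above usually means page-based ingestion. Here is a minimal, hypothetical Python sketch of that pattern; the endpoint shape and the `fetch_page` callable are assumptions standing in for a real HTTP client, and a production pipeline would add authentication, retries, and error handling:

```python
# Sketch of paginated REST ingestion: keep requesting pages until the
# API reports no next page, accumulating records as we go.
# fetch_page stands in for a real HTTP client call (e.g. requests.get).

def ingest_all_pages(fetch_page):
    """Pull every record from a paginated endpoint, one page at a time."""
    records, page = [], 1
    while True:
        payload = fetch_page(page)          # e.g. GET /api/items?page=N
        records.extend(payload["items"])
        if not payload.get("has_next"):     # stop when the API says so
            break
        page += 1
    return records

# Fake client simulating a two-page API response, for demonstration only.
def fake_fetch(page):
    pages = {
        1: {"items": [{"id": 1}, {"id": 2}], "has_next": True},
        2: {"items": [{"id": 3}], "has_next": False},
    }
    return pages[page]

print(ingest_all_pages(fake_fetch))
# [{'id': 1}, {'id': 2}, {'id': 3}]
```

Separating the fetch function from the pagination loop keeps the ingestion logic testable without network access, which is why the pattern appears so often in pipeline code.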

We look for the following SMART values in everyone we hire at Smart Communications:

  • Speak Openly - We are positive, creative, helpful, kind and we have fun. We listen and provide constructive feedback. Through meaningful conversations we encourage each other to be the best that we can be. We are not complainers; we are problem solvers.
  • Make a Difference - We focus on the things that matter and prioritize the things that have the greatest impact. We celebrate success and hold ourselves accountable for our choices. We do not sit on the sidelines.
  • Agile & Flexible - We are focused on evolving, improving and growing. We think differently and challenge the status quo with open minds. We ask 'why?' so that we can help remove complexity. We do not allow hurdles to get in our way.
  • Results-Focused - We get stuff done by being efficient, working at pace and paying attention to detail. We focus on finding solutions and fixing things. We do not believe in being busy for the sake of being busy; we focus on productivity.
  • Teamwork - We are stronger and better together. We collaborate, trust and support each other to deliver results for our company and our customers. We do not want anyone to feel disengaged; we are in this together!

What’s the deal?

We will provide you with the tools, equipment and support to give you the best possible chance of success and over-achieving your goals. Salary will depend on your experience and will be highly competitive. All our packages include an annual bonus based on the company's performance, so we are all incentivised to over-achieve! In addition to a friendly, flexible and fun working environment, we provide a range of other benefits, including extensive health insurance, income protection, life assurance, subsidised gym membership, leisure travel insurance, pension contribution, Cycle2Work and childcare vouchers, as well as 25 days' holiday allowance plus an additional day off for your birthday! Located in Covent Garden, our offices are comfortable, flexible, and are always stocked with free beverages and fresh fruit. This role is fully remote.

So, if we interest you, please let us know by applying for this position and tell us all about yourself.

Please note: we only consider applicants with current legal right to work in the countries in which our positions are based. All qualified applicants will receive consideration for employment regardless of race, colour, religion, sex, national origin, sexual orientation, age, disability, marital status or gender identity.

About the employer: Smart Communications group

Smart Communications is an exceptional employer that fosters a collaborative and innovative work culture, perfect for aspiring data engineers. With a focus on employee growth, you will gain hands-on experience with cutting-edge cloud technologies while contributing to impactful projects that drive data-driven decisions across the company. Enjoy a competitive salary, extensive benefits, and a flexible remote working environment, all while being part of a team that values open communication and teamwork.

Contact Detail:

Smart Communications group Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land this Junior Data Engineer role

✨Tip Number 1

Network like a pro! Reach out to folks in the data engineering field on LinkedIn or at local meetups. You never know who might have a lead on that perfect Junior Data Engineer role.

✨Tip Number 2

Show off your skills! Create a GitHub repository with projects showcasing your data pipeline work. This gives potential employers a taste of what you can do and sets you apart from the crowd.

✨Tip Number 3

Prepare for interviews by brushing up on common data engineering questions. Practice explaining your thought process when building data pipelines, as this will demonstrate your problem-solving skills.

✨Tip Number 4

Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love hearing from passionate candidates like you!

We think you need these skills to ace this Junior Data Engineer role

Data Engineering
Python
SQL
PySpark
Data Pipeline Development
Data Integration
Data Quality Checks
Root Cause Analysis
Technical Documentation
Cloud Storage Concepts
Data Modelling
Organisational Skills
Time Management
Problem-Solving Skills
Eagerness to Learn

Some tips for your application 🫡

Show Your Passion for Data: When writing your application, let us see your excitement for data engineering! Share any projects or experiences that highlight your enthusiasm for building reliable data pipelines and working with data technologies.

Tailor Your Application: Make sure to customise your CV and cover letter to match the job description. Highlight your experience with Python, SQL, and any relevant tools like PySpark. We want to see how your skills align with what we're looking for!

Be Clear and Concise: Keep your application straightforward and to the point. Use bullet points where possible to make it easy for us to read through your qualifications and experiences. We appreciate clarity and organisation!

Apply Through Our Website: Don’t forget to submit your application through our website! It’s the best way for us to receive your details and ensures you’re considered for the role. We can’t wait to hear from you!

How to prepare for a job interview at Smart Communications group

✨Know Your Data Tools

Make sure you brush up on your knowledge of Python, SQL, and PySpark before the interview. Be ready to discuss how you've used these tools in past projects or coursework, as they'll be crucial for building those reliable data pipelines.

✨Show Your Problem-Solving Skills

Prepare to share examples of how you've tackled data-related challenges. Think about specific instances where you identified a problem, implemented a solution, and what the outcome was. This will demonstrate your analytical thinking and attention to detail.

✨Understand the Company’s Values

Familiarise yourself with Smart Communications' SMART values. Be prepared to discuss how you embody these values in your work. Showing that you align with their culture can set you apart from other candidates.

✨Ask Insightful Questions

Prepare thoughtful questions about the role and the team. Inquire about the current data challenges they face or how they measure success in the Data Operations function. This shows your genuine interest in the position and helps you assess if it’s the right fit for you.
