Data Engineer

Full-Time, £36,000–£60,000 / year (est.), Home office (partial)

At a Glance

  • Tasks: Build and design scalable data pipelines using Python and SQL.
  • Company: Join a forward-thinking company focused on innovative analytics solutions.
  • Benefits: Hybrid working, real ownership, and exposure to modern cloud and AI tools.
  • Why this job: Shape the future of data engineering with impactful projects in a greenfield environment.
  • Qualifications: Strong Python and SQL skills; experience with ETL/ELT or data pipelines.
  • Other info: Great opportunity for growth into platform and AI technologies.

The predicted salary is between £36,000 and £60,000 per year.

This is a hands-on role focused primarily on building a greenfield end-to-end analytics platform, with real ownership and scope to shape how data is engineered, deployed and scaled across the organisation. You’ll work closely with the technical lead and have the opportunity to deepen your platform and AI exposure over time — but the core focus is strong data engineering fundamentals.

The Role

Design and build scalable data pipelines and backend services that power analytics, internal tools and client-facing digital products. This is a practical, delivery-focused position for someone who enjoys writing clean Python, solving data problems, and shipping production systems in the cloud.

Key Responsibilities

  • Design and build data pipelines using Python and SQL
  • Work with cloud infrastructure (Azure preferred; AWS/GCP welcome)
  • Deploy and manage containerised services using Docker
  • Embed data quality, validation and reliability best practices
  • Collaborate on architectural improvements as the platform evolves
  • Contribute to documentation and engineering standards
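To give a concrete flavour of the day-to-day work these responsibilities describe, here is a minimal, illustrative Python sketch of a pipeline that extracts records, applies a data-quality check, and loads the clean rows via SQL. All table, column and function names below are invented for the example; the real platform's schemas and tooling will differ.

```python
import sqlite3

# Hypothetical raw records; in practice these would come from an upstream source.
RAW_ROWS = [
    ("2024-01-01", "acme", 120.0),
    ("2024-01-02", "acme", None),    # missing amount: should fail validation
    ("2024-01-02", "globex", 98.5),
]

def extract():
    """Extract: yield raw records from the source (hard-coded for this sketch)."""
    yield from RAW_ROWS

def validate(row):
    """Data quality: reject rows with a missing or negative amount."""
    _day, _client, amount = row
    return amount is not None and amount >= 0

def load(rows, conn):
    """Load: write validated rows into a relational store using SQL."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS daily_revenue (day TEXT, client TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO daily_revenue VALUES (?, ?, ?)", rows)

def run_pipeline(conn):
    """Run extract -> validate -> load and return the number of rows loaded."""
    clean = [row for row in extract() if validate(row)]
    load(clean, conn)
    return conn.execute("SELECT COUNT(*) FROM daily_revenue").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = run_pipeline(conn)
print(loaded)  # 2 of the 3 rows pass validation
```

The sketch uses the standard-library `sqlite3` module so it runs anywhere; a production version of this kind of pipeline would typically target PostgreSQL and run inside a containerised service.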

Essential Skills

  • Strong Python (production-level, OOP)
  • Solid SQL and relational database experience (PostgreSQL preferred)
  • Experience building ETL/ELT or data pipelines
  • Comfortable working independently in a collaborative team

Desirable / Growth Areas (Not Required Day One)

  • Kubernetes or broader container orchestration
  • Infrastructure as Code (Terraform/Bicep/CloudFormation)
  • Observability and monitoring tooling
  • Exposure to analytics platforms or streaming architectures
  • Interest in ML/MLOps or applied AI systems
  • Experience in financial services or regulated environments

Why Apply?

  • Greenfield analytics platform build
  • Real ownership and technical impact
  • Exposure to modern cloud and AI tooling
  • Hybrid working in the City of London
  • Clear opportunity to broaden into platform and AI over time

Data Engineer employer: Accelero

Join a forward-thinking company that values innovation and technical excellence, offering you the chance to take ownership of a greenfield analytics platform in the vibrant City of London. With a strong emphasis on employee growth, you'll have access to hybrid working arrangements and opportunities to expand your skills in cloud and AI technologies, all within a collaborative and supportive work culture that prioritises clean coding and data integrity.

Contact Detail:

Accelero Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Data Engineer role

✨Tip Number 1

Network like a pro! Reach out to folks in the data engineering field on LinkedIn or at meetups. We can’t stress enough how personal connections can lead to job opportunities that aren’t even advertised.

✨Tip Number 2

Show off your skills! Create a portfolio showcasing your data pipelines and projects. We love seeing practical examples of your work, especially if you’ve used Python and SQL to solve real-world problems.

✨Tip Number 3

Prepare for those interviews! Brush up on your technical knowledge and be ready to discuss your experience with cloud infrastructure and containerisation. We want to see your passion for data engineering shine through!

✨Tip Number 4

Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we’re always on the lookout for talented individuals who are eager to make an impact.

We think you need these skills to ace the Data Engineer role

Python
SQL
Data Pipeline Design
Cloud Infrastructure (Azure preferred; AWS/GCP welcome)
Containerisation (Docker)
Data Quality Best Practices
Relational Database Experience (PostgreSQL preferred)
ETL/ELT Development
Collaboration Skills
OOP (Object-Oriented Programming)
Kubernetes (desirable)
Infrastructure as Code (Terraform/Bicep/CloudFormation) (desirable)
Observability and Monitoring Tooling (desirable)
Interest in ML/MLOps or Applied AI Systems (desirable)

Some tips for your application 🫡

Show Off Your Skills: Make sure to highlight your strong Python and SQL skills in your application. We want to see how you've used these languages in real projects, especially when building data pipelines or working with cloud infrastructure.

Tailor Your Application: Don’t just send a generic application! Take the time to tailor your CV and cover letter to reflect the specific requirements of the Data Engineer role. Mention any relevant experience with Azure, Docker, or ETL processes that you have.

Be Clear and Concise: When writing your application, keep it clear and to the point. We appreciate well-structured applications that are easy to read. Use bullet points where necessary to make your achievements stand out!

Apply Through Our Website: We encourage you to apply through our website for the best chance of getting noticed. It’s the easiest way for us to track your application and ensure it reaches the right people!

How to prepare for a job interview at Accelero

✨Know Your Data Engineering Fundamentals

Make sure you brush up on your data engineering basics, especially around building scalable data pipelines and using Python and SQL. Be ready to discuss your past experiences with these technologies and how you've applied them in real-world scenarios.

✨Showcase Your Problem-Solving Skills

Prepare to talk about specific data problems you've solved in the past. Use examples that highlight your ability to write clean code and deploy production systems, especially in cloud environments like Azure or AWS. This will demonstrate your hands-on experience and practical approach.

✨Familiarise Yourself with Containerisation

Since the role involves deploying and managing containerised services using Docker, it’s a good idea to have a solid understanding of how Docker works. If you have experience with Kubernetes or Infrastructure as Code tools, be sure to mention that too!

✨Emphasise Collaboration and Documentation

This position requires working closely with a technical lead and contributing to documentation and engineering standards. Be prepared to discuss how you’ve collaborated in teams before and the importance of maintaining clear documentation in your projects.

