Data Engineer in London

London · Full-Time · £36,000 – £60,000 / year (est.) · Hybrid (2–3 days in the office)
Intellect Group

At a Glance

  • Tasks: Build and optimise data pipelines using modern Python tools for financial datasets.
  • Company: Join a dynamic UK-based fintech with a focus on innovation.
  • Benefits: Hybrid work model, hands-on experience with cutting-edge tech, and real ownership of projects.
  • Why this job: Make a tangible impact on data infrastructure in the fast-paced financial sector.
  • Qualifications: Strong Python and SQL skills, with experience in data engineering and pipeline management.
  • Other info: Opportunity to work with complex datasets and grow your career in a supportive environment.

The predicted salary is between £36,000 and £60,000 per year.

Are you a Data Engineer who enjoys building production-grade pipelines, optimising performance, and working with modern Python tooling (DuckDB/Polars) on time-series datasets? I’m supporting a UK-based fintech in their search for a hands-on Python Data Engineer to help build and improve the data infrastructure powering a unified data + analytics API for financial markets participants.

You’ll sit in an engineering/analytics team and take ownership of pipelines end-to-end — from onboarding new datasets through to reliability, monitoring and data quality in production.

In this role, you’ll:

  • Build, streamline and improve ETL/data pipelines (prototype → production)
  • Ingest and normalise high-velocity time-series datasets from multiple external sources
  • Work heavily in Python with a modern stack including DuckDB and Polars (plus Parquet/PyArrow)
  • Orchestrate workflows and improve reliability (they use Temporal — similar orchestration experience is fine)
  • Improve data integrity and visibility: validations, automated checks, backfills, monitoring/alerting
  • Support downstream analytics and client-facing outputs (dashboards/PDF/Plotly — least important)

What’s in it for you?

  • Modern data stack – DuckDB/Polars + Parquet/Arrow in a genuinely hands-on environment
  • Ownership & impact – You’ll be close to the data flows and have real influence on performance and reliability
  • Market data exposure – Work with complex financial datasets (experience helpful, interest is enough)
  • Hybrid working – London preferred, with 2–3 days in the office
  • Start ASAP – Interviewing now

What my client is looking for:

  • Strong Python + SQL fundamentals (data engineering / ETL / pipeline ownership)
  • Hands-on experience with DuckDB and/or Polars (DuckDB especially valuable)
  • Experience operating pipelines in production (monitoring, backfills, incident/RCA mindset, data quality)
  • Cloud experience with demonstrable production use (Azure preferred)
  • Clear communicator, comfortable working across engineering/analytics stakeholders

Nice to have:

  • Time-series data experience (market data, telemetry, pricing, events)
  • Streaming exposure (Kafka/Event Hubs/Kinesis)
  • Experience with Temporal (or similar orchestrators like Airflow/Dagster/Prefect)
  • Any exposure to AI agents / automation tooling

Apply now!

Data Engineer in London employer: Intellect Group

Join a dynamic UK-based fintech that prioritises innovation and employee growth, offering a modern data stack and the opportunity to take ownership of impactful projects. With a hybrid work model in London, you'll collaborate closely with a talented engineering and analytics team, ensuring your contributions directly enhance data performance and reliability in the financial sector. Enjoy a supportive work culture that values clear communication and encourages continuous learning in a fast-paced environment.

Contact Detail:

Intellect Group Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land Data Engineer in London

✨Tip Number 1

Network like a pro! Reach out to folks in the fintech space, especially those working with data engineering. Attend meetups or webinars, and don’t be shy about sliding into DMs on LinkedIn. You never know who might have the inside scoop on job openings!

✨Tip Number 2

Show off your skills! Create a portfolio showcasing your projects, especially those involving Python, DuckDB, or Polars. Share your GitHub link when you chat with potential employers; it’s a great way to demonstrate your hands-on experience and passion for data engineering.

✨Tip Number 3

Prepare for technical interviews by brushing up on your SQL and Python fundamentals. Practice common data engineering problems and be ready to discuss your past experiences with ETL processes and pipeline ownership. Confidence is key, so get comfortable talking about your work!

✨Tip Number 4

Don’t forget to apply through our website! We’ve got some fantastic opportunities waiting for you, and applying directly can sometimes give you an edge. Plus, it shows you’re genuinely interested in joining our team!

We think you need these skills to ace Data Engineer in London

Python
SQL
ETL
Data Pipeline Ownership
DuckDB
Polars
Data Quality Monitoring
Cloud Experience (Azure preferred)
Time-Series Data Experience
Streaming Technologies (Kafka/Event Hubs/Kinesis)
Workflow Orchestration (Temporal, Airflow, Dagster, Prefect)
Data Integrity and Validation
Communication Skills

Some tips for your application 🫡

Show Off Your Python Skills: Make sure to highlight your experience with Python, especially with DuckDB and Polars. We want to see how you've used these tools in real projects, so don’t hold back on the details!

Talk About Your Pipeline Experience: We love a good story about data pipelines! Share examples of how you've built, optimised, or maintained ETL processes. This is your chance to show us your hands-on experience and problem-solving skills.

Demonstrate Your Cloud Knowledge: If you’ve worked with cloud platforms like Azure, let us know! Describe any production use cases you’ve been involved in, as this will really help us understand your background in managing data infrastructure.

Keep It Clear and Concise: When writing your application, clarity is key. Use straightforward language and structure your thoughts well. We appreciate a well-organised application that gets straight to the point, so we can see your potential at a glance!

How to prepare for a job interview at Intellect Group

✨Know Your Tech Stack

Make sure you’re well-versed in the modern tools mentioned in the job description, especially DuckDB and Polars. Brush up on your Python skills and be ready to discuss how you've used these technologies in past projects.

✨Showcase Your Pipeline Experience

Prepare specific examples of ETL processes you've built or improved. Be ready to explain the challenges you faced and how you ensured data quality and reliability in production environments.

✨Communicate Clearly

Since the role involves working with various stakeholders, practice explaining complex technical concepts in simple terms. This will demonstrate your ability to collaborate effectively across teams.

✨Demonstrate Your Problem-Solving Skills

Think of scenarios where you had to troubleshoot issues in data pipelines or improve performance. Be prepared to discuss your thought process and the steps you took to resolve these challenges.
