Data Engineer

Full-Time · £36,000 – £60,000 / year (est.) · Hybrid (London)
Intellect Group

At a Glance

  • Tasks: Build and optimise data pipelines using modern Python tools for financial datasets.
  • Company: Join a dynamic UK-based fintech with a hands-on engineering culture.
  • Benefits: Hybrid work model, competitive salary, and exposure to cutting-edge data technologies.
  • Why this job: Make a real impact on data infrastructure and analytics in the finance sector.
  • Qualifications: Strong Python and SQL skills, with experience in data engineering and pipeline ownership.
  • Other info: Opportunity for career growth in a fast-paced, innovative environment.

The predicted salary is between £36,000 and £60,000 per year.

Are you a Data Engineer who enjoys building production-grade pipelines, optimising performance, and working with modern Python tooling (DuckDB/Polars) on time-series datasets? I’m supporting a UK-based fintech in their search for a hands-on Python Data Engineer to help build and improve the data infrastructure powering a unified data + analytics API for financial markets participants.

You’ll sit in an engineering/analytics team and take ownership of pipelines end-to-end — from onboarding new datasets through to reliability, monitoring and data quality in production.

In this role, you’ll:

  • Build, streamline and improve ETL/data pipelines (prototype → production)
  • Ingest and normalise high-velocity time-series datasets from multiple external sources
  • Work heavily in Python with a modern stack including DuckDB and Polars (plus Parquet/PyArrow)
  • Orchestrate workflows and improve reliability (they use Temporal — similar orchestration experience is fine)
  • Improve data integrity and visibility: validations, automated checks, backfills, monitoring/alerting
  • Support downstream analytics and client-facing outputs (dashboards/PDF/Plotly — the lowest-priority part of the role)

What’s in it for you?

  • Modern data stack – DuckDB/Polars + Parquet/Arrow in a genuinely hands-on environment
  • Ownership & impact – You’ll be close to the data flows and have real influence on performance and reliability
  • Market data exposure – Work with complex financial datasets (experience helpful, interest is enough)
  • Hybrid working – London preferred, with 2–3 days in the office
  • Start ASAP – Interviewing now

What my client is looking for:

  • Strong Python + SQL fundamentals (data engineering / ETL / pipeline ownership)
  • Hands-on experience with DuckDB and/or Polars (DuckDB especially valuable)
  • Experience operating pipelines in production (monitoring, backfills, incident/RCA mindset, data quality)
  • Cloud experience with demonstrable production use (Azure preferred)
  • Clear communicator, comfortable working across engineering/analytics stakeholders

Nice to have:

  • Time-series data experience (market data, telemetry, pricing, events)
  • Streaming exposure (Kafka/Event Hubs/Kinesis)
  • Experience with Temporal (or similar orchestrators like Airflow/Dagster/Prefect)
  • Any exposure to AI agents / automation tooling

Apply now!

Data Engineer employer: Intellect Group

Join a dynamic UK-based fintech that prioritises innovation and employee growth, offering a modern data stack and the opportunity to take ownership of impactful projects. With a hybrid work model in London, you'll collaborate closely with a talented engineering and analytics team, ensuring your contributions directly enhance data performance and reliability in the financial sector. Enjoy a supportive work culture that values clear communication and encourages continuous learning in a fast-paced environment.

Contact Detail:

Intellect Group Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Data Engineer role

✨Tip Number 1

Network like a pro! Reach out to folks in the fintech space, especially those working with data engineering. Attend meetups or webinars, and don’t be shy about sliding into DMs on LinkedIn. You never know who might have the inside scoop on job openings!

✨Tip Number 2

Show off your skills! Create a portfolio showcasing your projects, especially those involving Python, DuckDB, or Polars. Share your GitHub link when you chat with potential employers; it’s a great way to demonstrate your hands-on experience and passion for data engineering.

✨Tip Number 3

Prepare for technical interviews by brushing up on your SQL and Python fundamentals. Practice common data engineering problems and be ready to discuss your past experiences with ETL processes and pipeline ownership. Confidence is key, so own your expertise!

✨Tip Number 4

Don’t forget to apply through our website! We’ve got some fantastic opportunities waiting for you, and applying directly can sometimes give you an edge. Plus, it shows you’re genuinely interested in joining our team!

We think you need these skills to ace the Data Engineer role

Python
SQL
ETL
Data Pipeline Ownership
DuckDB
Polars
Data Quality Monitoring
Cloud Experience (Azure preferred)
Time-Series Data Experience
Streaming Technologies (Kafka/Event Hubs/Kinesis)
Workflow Orchestration (Temporal, Airflow, Dagster, Prefect)
Data Integrity and Validation
Communication Skills

Some tips for your application 🫡

Tailor Your CV: Make sure your CV highlights your experience with Python, SQL, and any relevant tools like DuckDB or Polars. We want to see how your skills match the job description, so don’t be shy about showcasing your pipeline ownership and production experience!

Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re excited about this role and how your background in data engineering makes you a perfect fit. We love seeing genuine enthusiasm for the fintech space and modern data stacks.

Showcase Relevant Projects: If you've worked on any projects involving time-series datasets or ETL processes, make sure to mention them! We’re keen to see examples of your hands-on experience and how you’ve tackled challenges in previous roles.

Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you don’t miss out on any important updates. Plus, we love seeing applications come in through our own channels!

How to prepare for a job interview at Intellect Group

✨Know Your Tech Stack

Make sure you’re well-versed in the modern tools mentioned in the job description, especially DuckDB and Polars. Brush up on your Python skills and be ready to discuss how you've used these technologies in past projects.

✨Showcase Your Pipeline Experience

Prepare specific examples of ETL processes you've built or improved. Be ready to explain the challenges you faced and how you ensured data quality and reliability in production environments.

✨Communicate Clearly

Since the role involves working across engineering and analytics teams, practice explaining complex technical concepts in simple terms. This will demonstrate your ability to collaborate effectively with different stakeholders.

✨Demonstrate Your Problem-Solving Skills

Think of scenarios where you had to troubleshoot issues in data pipelines or improve performance. Be prepared to discuss your thought process and the steps you took to resolve these challenges.
