At a Glance
- Tasks: Build and optimise data pipelines using modern Python tools for financial datasets.
- Company: Join a dynamic UK-based fintech with a focus on innovation.
- Benefits: Hybrid work model, hands-on experience with cutting-edge tech, and real ownership of projects.
- Why this job: Make a tangible impact on data infrastructure in the fast-paced financial sector.
- Qualifications: Strong Python and SQL skills, with experience in data engineering and pipeline management.
- Other info: Opportunity to work with complex datasets and grow your career in a supportive environment.
The predicted salary is between £36,000 and £60,000 per year.
Are you a Data Engineer who enjoys building production-grade pipelines, optimising performance, and working with modern Python tooling (DuckDB/Polars) on time-series datasets? I’m supporting a UK-based fintech in their search for a hands-on Python Data Engineer to help build and improve the data infrastructure powering a unified data + analytics API for financial markets participants.
You’ll sit in an engineering/analytics team and take ownership of pipelines end-to-end — from onboarding new datasets through to reliability, monitoring and data quality in production.
In this role, you’ll:
- Build, streamline and improve ETL/data pipelines (prototype → production)
- Ingest and normalise high-velocity time-series datasets from multiple external sources
- Work heavily in Python with a modern stack including DuckDB and Polars (plus Parquet/PyArrow)
- Orchestrate workflows and improve reliability (they use Temporal — similar orchestration experience is fine)
- Improve data integrity and visibility: validations, automated checks, backfills, monitoring/alerting
- Support downstream analytics and client-facing outputs (dashboards/PDF/Plotly – lowest priority)
What’s in it for you?
- Modern data stack – DuckDB/Polars + Parquet/Arrow in a genuinely hands-on environment
- Ownership & impact – You’ll be close to the data flows and have real influence on performance and reliability
- Market data exposure – Work with complex financial datasets (experience helpful, interest is enough)
- Hybrid working – London preferred, with 2–3 days in the office
- Start ASAP – Interviewing now
What my client is looking for:
- Strong Python + SQL fundamentals (data engineering / ETL / pipeline ownership)
- Hands-on experience with DuckDB and/or Polars (DuckDB especially valuable)
- Experience operating pipelines in production (monitoring, backfills, incident/RCA mindset, data quality)
- Cloud experience with demonstrable production use (Azure preferred)
- Clear communicator, comfortable working across engineering/analytics stakeholders
Nice to have:
- Time-series data experience (market data, telemetry, pricing, events)
- Streaming exposure (Kafka/Event Hubs/Kinesis)
- Experience with Temporal (or similar orchestrators like Airflow/Dagster/Prefect)
- Any exposure to AI agents / automation tooling
Apply now!
Data Engineer in England – Employer: Intellect Group
Contact Detail:
Intellect Group Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role in England
✨Tip Number 1
Network like a pro! Reach out to folks in the fintech space, especially those who work with data engineering. Attend meetups or webinars, and don’t be shy to slide into DMs on LinkedIn. You never know who might have the inside scoop on job openings!
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving Python, DuckDB, or Polars. Share your GitHub link when you chat with potential employers; it’s a great way to demonstrate your hands-on experience.
✨Tip Number 3
Prepare for technical interviews by brushing up on your SQL and Python fundamentals. Practice common data engineering problems and be ready to discuss your past experiences with ETL processes and pipeline ownership. Confidence is key!
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen. Plus, we love seeing candidates who are proactive about their job search. So, get that application in and let’s get you closer to landing that Data Engineer role!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Python, SQL, and any relevant tools like DuckDB or Polars. We want to see how your skills match the job description, so don’t be shy about showcasing your pipeline ownership and production experience!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re excited about the role and how your background makes you a great fit. We love seeing genuine enthusiasm for data engineering and financial datasets, so let that passion come through!
Showcase Relevant Projects: If you've worked on any projects involving ETL processes or time-series data, make sure to mention them. We appreciate hands-on experience, so share specific examples of how you’ve built or improved data pipelines in the past.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you don’t miss out on any important updates. Plus, we love seeing applications come in through our own channels!
How to prepare for a job interview at Intellect Group
✨Know Your Tech Stack
Make sure you’re well-versed in the modern tools mentioned in the job description, especially DuckDB and Polars. Brush up on your Python skills and be ready to discuss how you've used these technologies in past projects.
✨Showcase Your Pipeline Experience
Prepare specific examples of ETL processes you've built or improved. Be ready to explain the challenges you faced and how you ensured data quality and reliability in production environments.
✨Communicate Clearly
Since the role involves working with various stakeholders, practice explaining complex technical concepts in simple terms. This will demonstrate your ability to collaborate effectively across teams.
✨Demonstrate Your Problem-Solving Skills
Think of scenarios where you had to troubleshoot issues in data pipelines or improve performance. Be prepared to discuss your thought process and the steps you took to resolve these challenges.