At a Glance
- Tasks: Build and maintain scalable data pipelines to support analytics in finance.
- Company: Join a leading financial services firm with a focus on innovation.
- Benefits: Competitive contract pay, hands-on experience, and impactful work.
- Why this job: Make a real difference in investment decision-making with your data skills.
- Qualifications: Mid-level Data Engineer with strong Python and Apache Spark experience.
- Other info: Collaborative environment with opportunities for continuous improvement.
The predicted salary is between £36,000 and £60,000 per year.
Start Date: ASAP
Contract Duration: 6 Months
Experience within the financial services industry is a must.
If you're a Data Engineer who enjoys building reliable, scalable data pipelines and wants your work to directly support front-office decision-making, this role offers exactly that. You'll join a data engineering function working closely with investment management and front-office stakeholders, helping ensure critical financial data is delivered accurately, efficiently and at scale.

This role sits at the intersection of technology, data and the business, and is ideal for someone who enjoys ownership, delivery, and solving real-world data challenges in a regulated environment. It is a hands-on opportunity for a mid-level engineer who can contribute from day one and take responsibility for production data workflows.
What You'll Be Doing
- Building and maintaining end-to-end data pipelines (ETL/ELT) to support analytics and downstream use cases
- Developing scalable data solutions using Python, with a focus on maintainability and performance
- Working with Apache Spark / PySpark to process and transform large datasets
- Supporting the ingestion, transformation and validation of complex financial data
- Improving the performance, reliability and resilience of existing data workflows
- Partnering with engineering, analytics and front-office teams to understand requirements and deliver trusted data assets
- Taking ownership of data issues and seeing them through to resolution
- Contributing ideas that improve data quality, automation, and overall platform efficiency
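To give a flavour of the extract, transform/validate and load steps listed above, here is a minimal sketch in plain Python. It uses only the standard library (the role itself involves Spark/PySpark at scale), and all field names and the rejection logic are invented purely for illustration:

```python
import csv
import io

# Hypothetical raw feed: a tiny CSV of trade records, one of which
# has an unparseable amount. All names here are illustrative only.
RAW = """trade_id,instrument,amount
T1,EQUITY,1500.00
T2,BOND,not_a_number
T3,EQUITY,-250.50
"""

def extract(raw_csv: str) -> list[dict]:
    """Read raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> tuple[list[dict], list[dict]]:
    """Parse amounts; route rows that fail validation to a rejects list."""
    clean, rejects = [], []
    for row in rows:
        try:
            clean.append({**row, "amount": float(row["amount"])})
        except ValueError:
            rejects.append(row)
    return clean, rejects

def load(rows: list[dict]) -> dict[str, float]:
    """Aggregate validated rows into totals per instrument."""
    totals: dict[str, float] = {}
    for row in rows:
        totals[row["instrument"]] = totals.get(row["instrument"], 0.0) + row["amount"]
    return totals

clean, rejects = transform(extract(RAW))
totals = load(clean)
```

In a production setting the same shape (ingest, validate with explicit reject handling, aggregate) would typically be expressed over Spark DataFrames rather than Python lists, with the reject path feeding data-quality monitoring.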
Skills That Will Help You Succeed
Essential
- Commercial experience as a mid-level Data Engineer
- Strong Python development skills
- Hands-on experience with Apache Spark / PySpark
- Solid experience building ETL/ELT pipelines
- Background within the financial services industry (investment management experience desirable)
- Comfortable working with production systems in a regulated environment
- Able to work independently and deliver in a fast-paced setting
Nice to Have
- Exposure to Polars
- Experience optimising Spark workloads
- Cloud data platform experience across AWS, Azure or GCP
What Makes This Role Appealing
- You'll work on data that directly supports investment and front-office functions
- You'll have ownership of production pipelines, not just isolated tasks
- You'll collaborate closely with both technical teams and business stakeholders
- Your work will have clear, visible impact on data quality, reliability and decision-making
- You'll join a team that values pragmatic engineering, accountability and continuous improvement
Interested? Get in touch to discuss the role in more detail and what success looks like in the first few months.
Data Engineer in London employer: Glocomms
Contact Detail:
Glocomms Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role in London
✨Tip Number 1
Network like a pro! Reach out to your connections in the financial services industry and let them know you're on the hunt for a Data Engineer role. You never know who might have the inside scoop on job openings or can put in a good word for you.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your best data pipelines and projects. This is your chance to demonstrate your Python and Apache Spark expertise, so make sure it's easy to access and visually appealing.
✨Tip Number 3
Prepare for interviews by brushing up on your technical knowledge and problem-solving skills. Be ready to discuss how you've tackled real-world data challenges in the past, especially in regulated environments like financial services.
✨Tip Number 4
Don't forget to apply through our website! We love seeing candidates who are proactive and engaged. Plus, it gives us a chance to get to know you better right from the start.
We think you need these skills to ace the Data Engineer role in London
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience as a Data Engineer, especially in the financial services industry. We want to see how your skills in Python and Apache Spark shine through, so don't hold back on those details!
Craft a Compelling Cover Letter: Your cover letter is your chance to tell us why you're the perfect fit for this role. Share specific examples of your past work with data pipelines and how you've tackled real-world data challenges. Make it personal and engaging!
Showcase Your Problem-Solving Skills: In your application, highlight instances where you've taken ownership of data issues and resolved them. We love candidates who can demonstrate their ability to improve data quality and efficiency, so let us know how you've made an impact!
Apply Through Our Website: We encourage you to apply directly through our website. It's the best way for us to receive your application and ensures you're considered for the role. Plus, it shows us you're keen to join the StudySmarter team!
How to prepare for a job interview at Glocomms
✨Know Your Data Pipelines
Make sure you can talk confidently about your experience building and maintaining ETL/ELT pipelines. Be ready to discuss specific projects where you've implemented these processes, especially in a financial services context.
✨Show Off Your Python Skills
Brush up on your Python development skills before the interview. Prepare to share examples of how you've used Python to develop scalable data solutions, focusing on maintainability and performance.
✨Familiarise Yourself with Apache Spark
Since this role involves working with Apache Spark and PySpark, be prepared to discuss your hands-on experience with these technologies. Think of scenarios where you've processed large datasets and how you optimised those workflows.
✨Understand the Financial Services Landscape
Having a background in financial services is crucial. Research the industry trends and challenges, and be ready to explain how your experience aligns with the needs of investment management and front-office functions.