At a Glance
- Tasks: Build and maintain scalable data pipelines to support analytics in finance.
- Company: Join a leading financial services firm with a focus on innovation.
- Benefits: Competitive pay, hands-on experience, and impactful work in a dynamic environment.
- Why this job: Make a real difference in investment decision-making with your data expertise.
- Qualifications: Mid-level Data Engineer with strong Python and Apache Spark skills.
- Other info: Collaborate with diverse teams and enjoy ownership of production data workflows.
The predicted salary is between £28,800 and £48,000 per year.
Experience within the financial services industry is a must. If you are a Data Engineer who enjoys building reliable, scalable data pipelines and wants your work to directly support front-office decision-making, this role offers exactly that. You will join a data engineering function working closely with investment management and front-office stakeholders, helping ensure critical financial data is delivered accurately, efficiently and at scale.

This role sits at the intersection of technology, data and the business, and is ideal for someone who enjoys ownership, delivery, and solving real-world data challenges in a regulated environment. It is a hands-on opportunity for a mid-level engineer who can contribute from day one and take responsibility for production data workflows.
Responsibilities:
- Building and maintaining end-to-end data pipelines (ETL/ELT) to support analytics and downstream use cases
- Developing scalable data solutions using Python, with a focus on maintainability and performance
- Working with Apache Spark / PySpark to process and transform large datasets
- Supporting the ingestion, transformation and validation of complex financial data
- Improving the performance, reliability and resilience of existing data workflows
- Partnering with engineering, analytics and front-office teams to understand requirements and deliver trusted data assets
- Taking ownership of data issues and seeing them through to resolution
- Contributing ideas that improve data quality, automation, and overall platform efficiency
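To give a flavour of the ingestion-and-validation work described above, here is a minimal sketch in plain Python. It is purely illustrative: the field names and rejection reasons are invented for the example, and in a role like this the equivalent logic would typically run inside a PySpark job over much larger datasets.

```python
from decimal import Decimal, InvalidOperation

# Hypothetical schema for the example: every record needs these fields.
REQUIRED_FIELDS = {"trade_id", "instrument", "amount"}

def validate_and_transform(records):
    """Split raw records into clean rows and rejects.

    A clean row has all required fields and a parseable amount.
    Amounts are normalised to Decimal rather than float, which
    matters when handling financial data.
    """
    clean, rejects = [], []
    for rec in records:
        if not REQUIRED_FIELDS <= rec.keys():
            rejects.append((rec, "missing field"))
            continue
        try:
            amount = Decimal(str(rec["amount"]))
        except InvalidOperation:
            rejects.append((rec, "bad amount"))
            continue
        clean.append({**rec, "amount": amount})
    return clean, rejects

raw = [
    {"trade_id": "T1", "instrument": "GBP/USD", "amount": "1250.50"},
    {"trade_id": "T2", "instrument": "FTSE100"},              # missing amount
    {"trade_id": "T3", "instrument": "GILT", "amount": "x"},  # unparseable
]
clean, rejects = validate_and_transform(raw)
print(len(clean), len(rejects))  # 1 clean row, 2 rejects
```

Keeping rejects alongside clean rows, rather than silently dropping bad records, is the kind of design choice that supports the "taking ownership of data issues" responsibility: rejected rows can be logged, investigated and replayed.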
Essential Skills & Experience:
- Commercial experience as a Data Engineer at a mid-level
- Strong Python development skills
- Hands-on experience with Apache Spark / PySpark
- Solid experience building ETL/ELT pipelines
- Background within the financial services industry (investment management experience desirable)
- Comfortable working with production systems in a regulated environment
- Able to work independently and deliver in a fast-paced setting
Nice to Have:
- Exposure to Polars
- Experience optimising Spark workloads
- Cloud data platform experience across AWS, Azure or GCP
What Makes This Role Appealing:
- You will work on data that directly supports investment and front-office functions
- You will have ownership of production pipelines, not just isolated tasks
- You will collaborate closely with both technical teams and business stakeholders
- Your work will have clear, visible impact on data quality, reliability and decision-making
- You will join a team that values pragmatic engineering, accountability and continuous improvement
Interested? Get in touch to discuss the role in more detail and what success looks like in the first few months.
Employer: Glocomms (Data Engineer, City of London)
Contact Details:
Glocomms Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role in the City of London
✨Network Like a Pro
Get out there and connect with folks in the financial services industry. Attend meetups, webinars, or even just grab a coffee with someone already in the field. Building relationships can open doors that job applications alone can't.
✨Show Off Your Skills
When you get the chance to chat with potential employers, don’t hold back! Share specific examples of your work with data pipelines and how you've tackled challenges in previous roles. This is your time to shine and show them what you can bring to the table.
✨Tailor Your Approach
Before any interview, do your homework on the company and its data needs. Tailor your conversation to highlight how your experience with Python and Apache Spark aligns with their goals. This shows you're not just another candidate; you're the right fit!
✨Apply Through Our Website
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you're genuinely interested in joining our team and contributing to our mission.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience in the financial services industry and showcases your skills in building data pipelines. We want to see how your background aligns with the role, so don’t hold back on relevant projects!
Showcase Your Technical Skills: When writing your application, emphasise your Python development skills and any hands-on experience with Apache Spark or PySpark. We’re looking for someone who can hit the ground running, so let us know what you’ve done!
Highlight Problem-Solving Abilities: This role is all about solving real-world data challenges, so share examples of how you've tackled complex data issues in the past. We love seeing candidates who take ownership and drive solutions!
How to prepare for a job interview at Glocomms
✨Know Your Data Pipelines
Make sure you can talk confidently about your experience building and maintaining ETL/ELT pipelines. Be ready to discuss specific projects where you've implemented these processes, especially in a financial services context.
✨Showcase Your Python Skills
Prepare to demonstrate your strong Python development skills. Think of examples where you've used Python to develop scalable data solutions, and be ready to explain how you ensure maintainability and performance in your code.
✨Familiarise Yourself with Apache Spark
Since this role involves working with Apache Spark/PySpark, brush up on your knowledge and experience with these technologies. Be prepared to discuss how you've processed and transformed large datasets in previous roles.
✨Understand the Financial Services Landscape
Given the importance of industry experience, make sure you can articulate your understanding of the financial services sector. Highlight any relevant experience you have, particularly in investment management, and how it relates to data engineering.