At a Glance
- Tasks: Build and optimise data pipelines for risk analytics in a dynamic trading environment.
- Company: Join a leading firm in Energy Trading with a strong reputation and excellent reviews.
- Benefits: Great salary, benefits package, and opportunities for professional growth.
- Other info: Collaborative culture with a focus on innovation and career development.
- Why this job: Make an impact in the fast-paced world of data engineering and risk technology.
- Qualifications: Experience with Microsoft Fabric, Python, SQL, and data pipeline management.
The predicted salary is between £60,000 and £80,000 per year.
This is a full-time role based in a West London office five days a week, with a great salary and benefits package on offer. The Data Engineer with Commodity Trading and Data Risk experience will focus on building, managing, and optimising risk analytics workflows. You will play a key role in designing and evolving data pipelines, models, and tooling that support critical risk processes, while also contributing to the broader data platform leveraging Microsoft Fabric. Working closely with the platform team, you will help develop shared infrastructure, scalable ingestion frameworks, and high-quality data products. This role sits at the intersection of data engineering, risk technology, and modern data platform practices within a global, always-on trading environment.
Job Accountabilities:
- Understand risk workflows end-to-end and translate them into reliable, production-grade data pipelines and products.
- Build and maintain batch and near-real-time data ingestion pipelines from diverse sources including relational databases, REST APIs, FTP/SFTP feeds, and cloud storage.
- Contribute to the data platform, delivering harmonised, governed data products that serve multiple business functions.
- Collaborate with risk, analytics, and engineering teams to productionise and maintain risk models and scripts.
- Implement best practices for code quality, testing, and release management across the data platform.
- Build and support Power BI semantic models and Direct Lake datasets.
- Manage and maintain Azure DevOps pipelines for deployment, version control, and CI/CD of risk-related scripts and data workflows.
- Monitor system performance and troubleshoot issues related to data pipelines and deployments.
- Ensure proper data governance, security, and compliance standards are applied.
Required Skills & Experience:
- Hands-on experience with Microsoft Fabric, Azure data services (e.g., Synapse Analytics, Data Factory), or Databricks for large-scale data processing.
- Proficiency in Python / SQL for data engineering and scripting.
- Familiarity with risk analytics environments or financial data.
- Strong experience with Apache Spark, including performance optimisation and distributed data processing.
- Strong experience ingesting data from diverse sources including relational databases, REST APIs, FTP/SFTP file feeds, and cloud storage.
- Experience managing data pipelines and production workflows.
- Experience with Azure DevOps (CI/CD pipelines, repos, release management).
- Experience with version control (Git) and software development lifecycle practices.
Nice to Have:
- Experience with streaming data technologies (e.g. Kafka, Azure Event Hubs).
- Exposure to metadata-driven framework design and config-driven pipeline development.
- Knowledge of non-relational databases (e.g. MongoDB, Cosmos DB).
- Familiarity with Data Mesh principles and domain-oriented data ownership.
- Experience with monitoring/logging tools in Azure.
Data Engineer in London — employer: Eaglecliff Recruitment
Contact Detail:
Eaglecliff Recruitment Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role in London
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with people on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data engineering projects, especially those involving Microsoft Fabric or Azure services. This will give potential employers a taste of what you can do.
✨Tip Number 3
Prepare for interviews by brushing up on common data engineering questions and scenarios. Practice explaining your past projects and how they relate to risk analytics and data pipelines.
✨Tip Number 4
Don't forget to apply through our website! It’s the best way to ensure your application gets seen. Plus, we love seeing candidates who take that extra step to connect with us directly.
We think you need these skills to ace the Data Engineer role in London
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with Microsoft Fabric, Azure services, and any relevant risk analytics work. We want to see how your skills match what we're looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data engineering and how your background fits into our team. Keep it concise but engaging – we love a good story!
Showcase Your Projects: If you've worked on any cool projects related to data pipelines or risk analytics, make sure to mention them. We’re keen to see real examples of your work and how you’ve tackled challenges in the past.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you don’t miss out on any important updates. Plus, we love seeing applications come in through our own channels!
How to prepare for a job interview at Eaglecliff Recruitment
✨Know Your Data Inside Out
Make sure you understand the key concepts of data engineering, especially in relation to risk analytics. Brush up on your knowledge of Microsoft Fabric, Azure services, and how they integrate with data pipelines. Being able to discuss specific projects or experiences where you've applied these technologies will really impress.
✨Showcase Your Problem-Solving Skills
Prepare to discuss challenges you've faced in previous roles, particularly around data ingestion and pipeline management. Think about how you optimised performance using Apache Spark or resolved issues in production workflows. Real-life examples will demonstrate your ability to troubleshoot and innovate.
✨Collaboration is Key
This role involves working closely with various teams, so be ready to talk about your experience collaborating with risk, analytics, and engineering teams. Highlight any successful projects where teamwork led to improved outcomes, and show that you value communication and shared goals.
✨Emphasise Best Practices
Familiarise yourself with best practices in code quality, testing, and release management. Be prepared to discuss how you've implemented CI/CD pipelines in Azure DevOps or managed version control with Git. Showing that you prioritise quality and governance will set you apart from other candidates.