Data Engineer : Commodities Risk : Azure / Microsoft Stack in London

London | Full-Time | £60,000 – £80,000 / year (est.) | No home office possible

At a Glance

  • Tasks: Design and optimise data pipelines for risk analytics in a fast-paced trading environment.
  • Company: Join a rapidly expanding Energy Trading & Commodities house in Kensington.
  • Benefits: Competitive salary and a friendly, dynamic work environment.
  • Other info: Non-hybrid role: 5 days a week in the office with excellent career growth opportunities.
  • Why this job: Make a real impact on critical risk processes using cutting-edge Microsoft technologies.
  • Qualifications: Experience with Azure data services, Python, SQL, and data pipeline management.

The predicted salary is between £60,000 and £80,000 per year.

This is an exciting opportunity to join a rapidly expanding Energy Trading & Commodities house right in the heart of Kensington. We are looking for a talented Data Engineer with a strong focus on risk analytics workflows – building, managing and optimising them. You will play a key role in designing and evolving the data pipelines, models, and tooling that support critical risk processes, while also contributing to our broader data platform, which leverages Microsoft Fabric. Working closely with the platform team, you will help develop shared infrastructure, scalable ingestion frameworks, and high-quality data products. This role sits at the intersection of data engineering, risk technology and modern data platform practices within a global, always-on trading environment. It offers a competitive salary in a fast-paced and friendly work environment. This is a non-hybrid role: 5 days per week in our Kensington office.

Job Accountabilities

  • Understand risk workflows end-to-end and translate them into reliable, production-grade data pipelines and products.
  • Build and maintain batch and near-real-time data ingestion pipelines from diverse sources including relational databases, REST APIs, FTP/SFTP feeds, and cloud storage.
  • Contribute to the data platform delivering harmonised, governed data products that serve multiple business functions.
  • Collaborate with risk, analytics, and engineering teams to productionise and maintain risk models and scripts.
  • Implement best practices for code quality, testing, and release management across the data platform.
  • Build and support Power BI semantic models and Direct Lake datasets.
  • Manage and maintain Azure DevOps pipelines for deployment, version control, and CI/CD of risk-related scripts and data workflows.
  • Monitor system performance and troubleshoot issues related to data pipelines and deployments.
  • Ensure proper data governance, security, and compliance standards are applied.

Required Skills & Experience

  • Hands-on experience with Microsoft Fabric, Azure data services (e.g., Synapse Analytics, Data Factory), or Databricks for large-scale data processing.
  • Proficiency in Python / SQL for data engineering and scripting.
  • Familiarity with risk analytics environments or financial data.
  • Strong experience with Apache Spark, including performance optimisation and distributed data processing.
  • Strong experience ingesting data from diverse sources including relational databases, REST APIs, FTP/SFTP file feeds, and cloud storage.
  • Experience managing data pipelines and production workflows.
  • Experience with Azure DevOps (CI/CD pipelines, repos, release management).
  • Experience with version control (Git) and software development lifecycle practices.

Nice to Have

  • Experience with streaming data technologies (e.g. Kafka, Azure Event Hubs).
  • Exposure to metadata-driven framework design and config-driven pipeline development.
  • Knowledge of non-relational databases (e.g. MongoDB, Cosmos DB).
  • Familiarity with Data Mesh principles and domain-oriented data ownership.
  • Experience with monitoring/logging tools in Azure.

Data Engineer : Commodities Risk : Azure / Microsoft Stack in London employer: Eaglecliff

Join a dynamic Energy Trading & Commodities house in the vibrant heart of Kensington, where you will thrive in a fast-paced and friendly work environment. As a Data Engineer, you will have the opportunity to work with cutting-edge technologies like Microsoft Fabric and Azure, while contributing to critical risk processes and data governance. With a strong emphasis on employee growth and collaboration, this role offers a competitive salary and the chance to be part of a global trading team dedicated to innovation and excellence.

Contact Detail:

Eaglecliff Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land Data Engineer : Commodities Risk : Azure / Microsoft Stack in London

✨Tip Number 1

Network like a pro! Reach out to folks in the industry on LinkedIn or at events. A friendly chat can lead to opportunities that aren’t even advertised yet.

✨Tip Number 2

Show off your skills! Create a portfolio showcasing your data pipelines and projects. This gives potential employers a taste of what you can do, especially with Azure and Microsoft Fabric.

✨Tip Number 3

Prepare for interviews by brushing up on risk analytics and data engineering concepts. Be ready to discuss how you’ve tackled challenges in past projects, especially around data governance and compliance.

✨Tip Number 4

Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are proactive!

We think you need these skills to ace Data Engineer : Commodities Risk : Azure / Microsoft Stack in London

Microsoft Fabric
Azure Data Services
Synapse Analytics
Data Factory
Databricks
Python
SQL
Apache Spark
Data Ingestion
Azure DevOps
CI/CD Pipelines
Version Control (Git)
Data Governance
Risk Analytics
Monitoring Tools in Azure

Some tips for your application 🫡

Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with Azure, Python, and SQL, and don’t forget to mention any relevant projects or achievements that showcase your skills in risk analytics and data pipelines.

Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re passionate about this role and how your background aligns with our needs. Be specific about your experience with Microsoft Fabric and data governance, and show us how you can contribute to our team.

Showcase Your Technical Skills: In your application, make sure to highlight your hands-on experience with tools like Azure DevOps and Apache Spark. We want to see how you’ve used these technologies in real-world scenarios, so don’t hold back on the details!

Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it shows us you’re keen to join our team at StudySmarter!

How to prepare for a job interview at Eaglecliff

✨Know Your Risk Workflows

Before the interview, make sure you understand the end-to-end risk workflows relevant to the role. Be prepared to discuss how you would translate these workflows into reliable data pipelines. This shows that you’re not just technically skilled but also understand the business context.

✨Showcase Your Technical Skills

Brush up on your hands-on experience with Microsoft Fabric and Azure data services. Be ready to share specific examples of how you've used Python, SQL, or Apache Spark in past projects. Highlight any performance optimisation techniques you've implemented to demonstrate your expertise.

✨Collaboration is Key

This role involves working closely with various teams, so be prepared to discuss your experience collaborating with risk, analytics, and engineering teams. Share examples of how you’ve contributed to productionising risk models or scripts, as this will show your ability to work in a team-oriented environment.

✨Prepare for Practical Scenarios

Expect to face practical scenarios or case studies during the interview. Practice explaining how you would build and maintain data ingestion pipelines from diverse sources. This will help you demonstrate your problem-solving skills and your understanding of best practices in data governance and compliance.
