Data Engineer : Commodities Risk : Azure / Microsoft Stack

Full-Time · £60,000 – £80,000 / year (est.) · No home office possible

At a Glance

  • Tasks: Design and optimise data pipelines for risk analytics in a fast-paced trading environment.
  • Company: Join a rapidly expanding Energy Trading & Commodities house in Kensington.
  • Benefits: Competitive salary and a friendly, dynamic work environment.
  • Other info: Non-hybrid role with excellent career growth opportunities.
  • Why this job: Make an impact in data engineering while working with cutting-edge Microsoft technologies.
  • Qualifications: Experience with Azure services, Python, SQL, and data pipeline management.

The predicted salary is between £60,000 and £80,000 per year.

This is an exciting opportunity to join a rapidly expanding Energy Trading & Commodities house in the heart of Kensington. We are looking for a talented Data Engineer with a real focus on risk analytics workflows – building, managing and optimising them. You will play a key role in designing and evolving the data pipelines, models and tooling that support critical risk processes, while also contributing to the broader data platform built on Microsoft Fabric. Working closely with the platform team, you will help develop shared infrastructure, scalable ingestion frameworks and high-quality data products. The role sits at the intersection of data engineering, risk technology and modern data platform practices within a global, always-on trading environment. It offers a competitive salary in a fast-paced and friendly work environment. This is a non-hybrid role: five days per week in the Kensington office.

Job Accountabilities

  • Understand risk workflows end-to-end and translate them into reliable, production-grade data pipelines and products.
  • Build and maintain batch and near-real-time data ingestion pipelines from diverse sources including relational databases, REST APIs, FTP/SFTP feeds, and cloud storage.
  • Contribute to the data platform delivering harmonised, governed data products that serve multiple business functions.
  • Collaborate with risk, analytics, and engineering teams to productionise and maintain risk models and scripts.
  • Implement best practices for code quality, testing, and release management across the data platform.
  • Build and support Power BI semantic models and Direct Lake datasets.
  • Manage and maintain Azure DevOps pipelines for deployment, version control, and CI/CD of risk-related scripts and data workflows.
  • Monitor system performance and troubleshoot issues related to data pipelines and deployments.
  • Ensure proper data governance, security, and compliance standards are applied.

Required Skills & Experience

  • Hands-on experience with Microsoft Fabric, Azure data services (e.g., Synapse Analytics, Data Factory), or Databricks for large-scale data processing.
  • Proficiency in Python / SQL for data engineering and scripting.
  • Familiarity with risk analytics environments or financial data.
  • Strong experience with Apache Spark, including performance optimisation and distributed data processing.
  • Strong experience ingesting data from diverse sources including relational databases, REST APIs, FTP/SFTP file feeds, and cloud storage.
  • Experience managing data pipelines and production workflows.
  • Experience with Azure DevOps (CI/CD pipelines, repos, release management).
  • Experience with version control (Git) and software development lifecycle practices.

Nice to Have

  • Experience with streaming data technologies (e.g. Kafka, Azure Event Hubs).
  • Exposure to metadata-driven framework design and config-driven pipeline development.
  • Knowledge of non-relational databases (e.g. MongoDB, Cosmos DB).
  • Familiarity with Data Mesh principles and domain-oriented data ownership.
  • Experience with monitoring/logging tools in Azure.

Data Engineer : Commodities Risk : Azure / Microsoft Stack employer: Eaglecliff

Join a dynamic Energy Trading & Commodities house in the vibrant heart of Kensington, where you will thrive in a fast-paced and friendly work environment. As a Data Engineer, you will have the opportunity to work with cutting-edge technologies like Microsoft Fabric and Azure, while contributing to critical risk processes and data platforms. With a strong emphasis on employee growth, collaboration, and innovation, this role offers a competitive salary and the chance to make a meaningful impact in a global trading environment.

Contact Details:

Eaglecliff Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land Data Engineer : Commodities Risk : Azure / Microsoft Stack

✨Tip Number 1

Network like a pro! Reach out to folks in the industry, attend meetups, and connect with people on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.

✨Tip Number 2

Show off your skills! Create a portfolio showcasing your data engineering projects, especially those related to risk analytics. This will give potential employers a taste of what you can do and set you apart from the crowd.

✨Tip Number 3

Prepare for interviews by brushing up on your technical skills and understanding the company's risk workflows. Be ready to discuss how you've built and optimised data pipelines in the past – they’ll want to see your hands-on experience!

✨Tip Number 4

Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in joining our team at StudySmarter.

We think you need these skills to ace Data Engineer : Commodities Risk : Azure / Microsoft Stack

Microsoft Fabric
Azure Data Services
Synapse Analytics
Data Factory
Databricks
Python
SQL
Apache Spark
Data Ingestion
Azure DevOps
CI/CD Pipelines
Version Control (Git)
Data Governance
Risk Analytics
Monitoring Tools in Azure

Some tips for your application 🫡

Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with Microsoft Fabric, Azure data services, and any relevant risk analytics work. We want to see how your skills align with what we're looking for!

Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data engineering and how you can contribute to our team. Be sure to mention specific projects or experiences that relate to the job description.

Showcase Your Technical Skills: Don’t forget to showcase your technical skills in Python, SQL, and Apache Spark. We love seeing examples of how you've built and managed data pipelines, so include any relevant projects or achievements that demonstrate your expertise.

Apply Through Our Website: We encourage you to apply through our website for a smoother application process. It helps us keep track of your application and ensures you don’t miss out on any important updates from us!

How to prepare for a job interview at Eaglecliff

✨Know Your Risk Workflows

Before the interview, make sure you understand the end-to-end risk workflows relevant to the role. Be prepared to discuss how you would translate these workflows into reliable data pipelines. This shows that you’re not just technically skilled but also understand the business context.

✨Showcase Your Technical Skills

Brush up on your hands-on experience with Microsoft Fabric and Azure data services. Be ready to share specific examples of how you've used Python, SQL, or Apache Spark in past projects. Highlight any performance optimisation techniques you've implemented to demonstrate your expertise.

✨Collaboration is Key

This role involves working closely with various teams, so be prepared to discuss your experience collaborating with risk, analytics, and engineering teams. Share examples of how you’ve contributed to productionising risk models or maintaining data pipelines, showcasing your teamwork skills.

✨Prepare for Practical Scenarios

Expect to face practical scenarios or case studies during the interview. Think about how you would manage data ingestion from diverse sources or troubleshoot issues related to data pipelines. Practising these scenarios can help you articulate your thought process clearly.
