At a Glance
- Tasks: Build and optimise data pipelines for risk analytics in a dynamic trading environment.
- Company: Join a leading firm in energy trading with a strong reputation and collaborative culture.
- Benefits: Attractive salary, comprehensive benefits, and opportunities for professional growth.
- Other info: Work in a supportive team with excellent career advancement opportunities.
- Why this job: Make an impact in the fast-paced world of data engineering and risk technology.
- Qualifications: Experience with Microsoft Fabric, Python, SQL, and data pipeline management.
The predicted salary is between £60,000 and £80,000 per year.
Full-time, five days a week in the West London office, with a great salary and benefits package offered. The Data Engineer with Commodity Trading and Data Risk experience will focus on building, managing, and optimising risk analytics workflows. You will play a key role in designing and evolving the data pipelines, models, and tooling that support critical risk processes, while also contributing to the broader data platform built on Microsoft Fabric. Working closely with the platform team, you will help develop shared infrastructure, scalable ingestion frameworks, and high-quality data products. This role sits at the intersection of data engineering, risk technology, and modern data platform practices within a global, always-on trading environment.
Job Accountabilities:
- Understand risk workflows end-to-end and translate them into reliable, production-grade data pipelines and products.
- Build and maintain batch and near-real-time data ingestion pipelines from diverse sources including relational databases, REST APIs, FTP/SFTP feeds, and cloud storage.
- Contribute to the data platform delivering harmonised, governed data products that serve multiple business functions.
- Collaborate with risk, analytics, and engineering teams to productionise and maintain risk models and scripts.
- Implement best practices for code quality, testing, and release management across the data platform.
- Build and support Power BI semantic models and Direct Lake datasets.
- Manage and maintain Azure DevOps pipelines for deployment, version control, and CI/CD of risk-related scripts and data workflows.
- Monitor system performance and troubleshoot issues related to data pipelines and deployments.
- Ensure proper data governance, security, and compliance standards are applied.
Required Skills & Experience:
- Hands-on experience with Microsoft Fabric, Azure data services (e.g., Synapse Analytics, Data Factory), or Databricks for large-scale data processing.
- Proficiency in Python / SQL for data engineering and scripting.
- Familiarity with risk analytics environments or financial data.
- Strong experience with Apache Spark, including performance optimisation and distributed data processing.
- Strong experience ingesting data from diverse sources including relational databases, REST APIs, FTP/SFTP file feeds, and cloud storage.
- Experience managing data pipelines and production workflows.
- Experience with Azure DevOps (CI/CD pipelines, repos, release management).
- Experience with version control (Git) and software development lifecycle practices.
Nice to Have:
- Experience with streaming data technologies (e.g. Kafka, Azure Event Hubs).
- Exposure to metadata-driven framework design and config-driven pipeline development.
- Knowledge of non-relational databases (e.g. MongoDB, Cosmos DB).
- Familiarity with Data Mesh principles and domain-oriented data ownership.
- Experience with monitoring/logging tools in Azure.
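To give a flavour of the "config-driven pipeline development" mentioned above, here is a minimal sketch of a metadata-driven ingestion dispatcher. All source names, config fields, and handler functions are hypothetical; a real pipeline on Microsoft Fabric or Azure would replace these stubs with actual connectors for databases, REST APIs, SFTP feeds, or cloud storage.

```python
"""Minimal sketch of a config-driven ingestion dispatcher (illustrative only)."""
from typing import Callable, Dict, List

# Each source declares its type in config; the dispatcher picks the handler.
# Names and fields here are hypothetical examples.
PIPELINE_CONFIG = [
    {"name": "trades_db", "type": "relational", "table": "trades"},
    {"name": "risk_api", "type": "rest", "endpoint": "/v1/risk"},
    {"name": "eod_feed", "type": "sftp", "path": "/feeds/eod.csv"},
]

def ingest_relational(cfg: dict) -> str:
    # Stub: a real implementation would run an incremental SQL extract.
    return f"loaded table {cfg['table']}"

def ingest_rest(cfg: dict) -> str:
    # Stub: a real implementation would page through the API.
    return f"fetched {cfg['endpoint']}"

def ingest_sftp(cfg: dict) -> str:
    # Stub: a real implementation would download and validate the file.
    return f"pulled {cfg['path']}"

HANDLERS: Dict[str, Callable[[dict], str]] = {
    "relational": ingest_relational,
    "rest": ingest_rest,
    "sftp": ingest_sftp,
}

def run_pipelines(config: List[dict]) -> List[str]:
    """Dispatch each configured source to its handler; fail fast on unknowns."""
    results = []
    for cfg in config:
        handler = HANDLERS.get(cfg["type"])
        if handler is None:
            raise ValueError(f"no handler for source type {cfg['type']!r}")
        results.append(f"{cfg['name']}: {handler(cfg)}")
    return results

if __name__ == "__main__":
    for line in run_pipelines(PIPELINE_CONFIG):
        print(line)
```

The design point the role alludes to is that adding a new source becomes a config change rather than new pipeline code, which keeps ingestion frameworks scalable and testable.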
Data Engineer employer: Eaglecliff Recruitment
Contact Detail:
Eaglecliff Recruitment Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with people on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data engineering projects, especially those involving Microsoft Fabric or Azure services. This will give potential employers a taste of what you can do.
✨Tip Number 3
Prepare for interviews by brushing up on common data engineering questions and scenarios. Practice explaining your experience with risk analytics and data pipelines clearly and confidently.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who take that extra step!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Microsoft Fabric, Azure data services, and Python/SQL. We want to see how your skills align with the role, so don’t be shy about showcasing relevant projects or achievements!
Craft a Compelling Cover Letter: Your cover letter is your chance to tell us why you’re the perfect fit for the Data Engineer role. Share your passion for data engineering and risk analytics, and explain how your background makes you a great match for our team.
Showcase Your Projects: If you've worked on any data pipelines or risk models, make sure to mention them! We love seeing real-world examples of your work, especially if they involve batch and near-real-time data ingestion.
Apply Through Our Website: We encourage you to apply directly through our website for a smoother application process. It helps us keep track of your application and ensures you don’t miss out on any important updates!
How to prepare for a job interview at Eaglecliff Recruitment
✨Know Your Data Inside Out
Make sure you understand the key concepts of data engineering, especially in relation to risk analytics. Brush up on your knowledge of Microsoft Fabric, Azure services, and how they integrate with data pipelines. Being able to discuss specific projects or experiences where you've used these technologies will really impress.
✨Showcase Your Problem-Solving Skills
Prepare to discuss challenges you've faced in previous roles, particularly around data ingestion and pipeline management. Think about how you optimised performance or resolved issues in your workflows. This will demonstrate your hands-on experience and ability to troubleshoot effectively.
✨Collaborate Like a Pro
Since this role involves working closely with various teams, be ready to talk about your collaborative experiences. Share examples of how you've worked with risk, analytics, or engineering teams to productionise models or scripts. Highlighting your teamwork skills can set you apart from other candidates.
✨Master the Art of Code Quality
Familiarise yourself with best practices for code quality, testing, and release management. Be prepared to discuss how you've implemented these in past projects, especially in relation to CI/CD pipelines and version control. Showing that you prioritise quality will resonate well with the interviewers.