At a Glance
- Tasks: Build and optimise data pipelines for risk analytics in a dynamic trading environment.
- Company: Join a leading firm in Energy Trading with a strong reputation and excellent reviews.
- Benefits: Great salary, benefits package, and opportunities for professional growth.
- Other info: Collaborative culture with a focus on innovation and career development.
- Why this job: Make an impact in the fast-paced world of data engineering and risk technology.
- Qualifications: Experience with Microsoft Fabric, Python, SQL, and data pipeline management.
The predicted salary is between £60,000 and £80,000 per year.
Full-time role based in the West London office five days a week, with a great salary and benefits package offered. The Data Engineer, with Commodity Trading and Data Risk experience, will focus on building, managing, and optimising risk analytics workflows. You will play a key role in designing and evolving the data pipelines, models, and tooling that support critical risk processes, while also contributing to the broader data platform leveraging Microsoft Fabric. Working closely with the platform team, you will help develop shared infrastructure, scalable ingestion frameworks, and high-quality data products. This role sits at the intersection of data engineering, risk technology, and modern data platform practices within a global, always-on trading environment.
Job Accountabilities:
- Understand risk workflows end-to-end and translate them into reliable, production-grade data pipelines and products.
- Build and maintain batch and near-real-time data ingestion pipelines from diverse sources including relational databases, REST APIs, FTP/SFTP feeds, and cloud storage (see the illustrative sketch after this list).
- Contribute to the data platform delivering harmonised, governed data products that serve multiple business functions.
- Collaborate with risk, analytics, and engineering teams to productionize and maintain risk models and scripts.
- Implement best practices for code quality, testing, and release management across the data platform.
- Build and support Power BI semantic models and Direct Lake datasets.
- Manage and maintain Azure DevOps pipelines for deployment, version control, and CI/CD of risk-related scripts and data workflows.
- Monitor system performance and troubleshoot issues related to data pipelines and deployments.
- Ensure proper data governance, security, and compliance standards are applied.
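For illustration only, here is a minimal sketch of the kind of batch ingestion work described above, assuming a hypothetical REST endpoint and Azure Blob Storage container (the endpoint, container, and credential names are placeholders, not details from the role):

```python
# Minimal sketch: pull one day's positions from a hypothetical REST API and land
# the raw JSON in cloud storage for downstream risk pipelines to pick up.
import json
from datetime import date

import requests
from azure.storage.blob import BlobServiceClient

API_URL = "https://example.com/api/positions"        # hypothetical endpoint
CONNECTION_STRING = "<storage-connection-string>"    # placeholder credential
CONTAINER = "raw-risk-data"                          # hypothetical container


def ingest_positions(business_date: date) -> str:
    """Fetch positions for one business date and write them as a raw JSON blob."""
    response = requests.get(API_URL, params={"date": business_date.isoformat()}, timeout=30)
    response.raise_for_status()

    blob_path = f"positions/{business_date.isoformat()}.json"
    blob_service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
    blob_client = blob_service.get_blob_client(container=CONTAINER, blob=blob_path)
    blob_client.upload_blob(json.dumps(response.json()), overwrite=True)
    return blob_path


if __name__ == "__main__":
    print(f"Landed raw data at {ingest_positions(date.today())}")
```

A production pipeline of the kind the accountabilities describe would also add retries, schema validation, monitoring, and governance controls around this core step.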
Required Skills & Experience:
- Hands-on experience with Microsoft Fabric, Azure data services (e.g., Synapse Analytics, Data Factory), or Databricks for large-scale data processing.
- Proficiency in Python / SQL for data engineering and scripting.
- Familiarity with risk analytics environments or financial data.
- Strong experience with Apache Spark, including performance optimisation and distributed data processing (see the PySpark sketch after this list).
- Strong experience ingesting data from diverse sources including relational databases, REST APIs, FTP/SFTP file feeds, and cloud storage.
- Experience managing data pipelines and production workflows.
- Experience with Azure DevOps (CI/CD pipelines, repos, release management).
- Experience with version control (Git) and software development lifecycle practices.
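As a purely illustrative example of the Spark experience called for above, the sketch below aggregates hypothetical trade records with PySpark and partitions the output by date to keep downstream reads efficient (paths, columns, and table names are assumptions, not details from the role):

```python
# Minimal PySpark sketch: aggregate hypothetical trade records into daily
# exposures per book, writing the result partitioned by trade_date.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("risk-exposure-sketch").getOrCreate()

trades = spark.read.parquet("/lakehouse/raw/trades")    # hypothetical input path

daily_exposure = (
    trades
    .filter(F.col("status") == "CONFIRMED")             # assumed status column
    .groupBy("trade_date", "book")
    .agg(F.sum("notional").alias("total_notional"))
    .repartition("trade_date")                          # keep writes balanced per partition
)

(
    daily_exposure.write
    .mode("overwrite")
    .partitionBy("trade_date")
    .parquet("/lakehouse/curated/daily_exposure")       # hypothetical output path
)
```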
Nice to Have:
- Experience with streaming data technologies (e.g. Kafka, Azure Event Hubs).
- Exposure to metadata-driven framework design and config-driven pipeline development (illustrated in the sketch after this list).
- Knowledge of non-relational databases (e.g. MongoDB, Cosmos DB).
- Familiarity with Data Mesh principles and domain-oriented data ownership.
- Experience with monitoring/logging tools in Azure.
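By way of illustration of the config-driven pipeline development mentioned above, here is a minimal sketch in which each source is declared as configuration and a single generic runner executes whatever the config describes (all names and sources are hypothetical):

```python
# Minimal sketch of a config-driven loader: each pipeline is declared as data,
# and one generic runner executes whatever the configuration describes.
from typing import Callable

PIPELINES = [
    {"name": "positions", "source": "rest_api", "target": "raw-risk-data/positions"},
    {"name": "prices", "source": "sftp", "target": "raw-risk-data/prices"},
]  # hypothetical configuration, typically held in YAML/JSON rather than inline


def load_from_rest_api(target: str) -> None:
    print(f"Pulling from REST API into {target}")    # placeholder for real ingestion logic


def load_from_sftp(target: str) -> None:
    print(f"Pulling from SFTP into {target}")        # placeholder for real ingestion logic


LOADERS: dict[str, Callable[[str], None]] = {
    "rest_api": load_from_rest_api,
    "sftp": load_from_sftp,
}

for pipeline in PIPELINES:
    LOADERS[pipeline["source"]](pipeline["target"])
```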
Employer: Eaglecliff Recruitment (Data Engineer in Slough)
Contact Detail:
Eaglecliff Recruitment Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role in Slough
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with people on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data engineering projects, especially those involving Microsoft Fabric or Azure services. This will give potential employers a taste of what you can do.
✨Tip Number 3
Prepare for interviews by brushing up on common data engineering questions and scenarios. Practice explaining your past projects and how they relate to risk analytics and data pipelines.
✨Tip Number 4
Apply through our website! We make it easy for you to find roles that match your skills. Plus, it shows you're serious about joining our team and helps us get to know you better.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with Microsoft Fabric, Azure data services, and any relevant risk analytics work. We want to see how your skills match what we're looking for!
Showcase Your Projects: Include specific projects where you've built or managed data pipelines. If you've worked with batch or near-real-time ingestion, let us know! This helps us understand your hands-on experience and how you can contribute to our team.
Be Clear and Concise: When writing your application, keep it clear and to the point. Use bullet points for key achievements and avoid jargon unless it's relevant. We appreciate straightforward communication that gets to the heart of your experience.
Apply Through Our Website: Don't forget to apply through our website! It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it makes the process smoother for everyone involved.
How to prepare for a job interview at Eaglecliff Recruitment
✨Know Your Data Inside Out
Make sure you understand the key concepts of data engineering, especially in relation to risk analytics. Brush up on your knowledge of Microsoft Fabric, Azure services, and how they integrate with data pipelines. Being able to discuss specific projects or experiences where you've applied these technologies will really impress.
✨Showcase Your Problem-Solving Skills
Prepare to discuss challenges you've faced in previous roles, particularly around data ingestion and pipeline management. Think about how you optimised performance using Apache Spark or resolved issues in production workflows. Real-life examples will demonstrate your ability to troubleshoot and innovate.
✨Collaboration is Key
This role involves working closely with various teams, so be ready to talk about your experience collaborating with risk, analytics, and engineering teams. Highlight any successful projects where teamwork led to better outcomes, and show that you value communication and shared goals.
✨Emphasise Best Practices
Familiarise yourself with best practices in code quality, testing, and release management. Be prepared to discuss how you've implemented these in past roles, especially in relation to CI/CD pipelines and version control with Git. This shows you're not just a coder, but someone who cares about the overall quality and governance of data products.