At a Glance
- Tasks: Lead data engineering strategy and build state-of-the-art data products on Azure and Snowflake.
- Company: Dynamic firm at the forefront of data engineering in trading and finance.
- Benefits: Competitive salary, flexible working options, and opportunities for professional growth.
- Other info: Join a high-performing team and thrive in a fast-paced, innovative environment.
- Why this job: Make a real impact by transforming complex data into valuable insights and products.
- Qualifications: 12+ years in data engineering with leadership experience in regulated environments.
The predicted salary is between £80,000 and £100,000 per year.
Own the strategy and execution of best-in-class data engineering to deliver state-of-the-art data products and services at scale. Build and operate a modern estate on Azure and Snowflake, centred on event-driven architectures and high-throughput ingestion pipelines that feed analytics, risk, and AI/ML safely and cost-effectively. Establish the standards, tooling, and talent model that convert complex trading data into fast, reliable, governed, and reusable products, aligned to the firm’s semantic/knowledge-graph backbone.
Define and execute the global data engineering strategy (ingest → govern → serve → observe), aligned with enterprise architecture and governance. Standardise event patterns (Kafka/Flink), ELT (dbt/Spark/SQL), and serving layers (APIs/SQL/Graph) across regions. Build and coach high-performing squads; manage the engineering capacity plan; anticipate peaks and scale out via vetted staff-augmentation partners without lowering the bar. Run an objective skills framework, hiring rubric, and career paths; ensure global follow-the-sun support on critical flows.
Partner with Trading, Risk, Ops/Logistics, Finance/Settlement and Compliance to prioritise a value-backlog; communicate trade-offs on latency, cost and control. Align with Architecture on ontology/knowledge-graph mapping; with Governance on evidence and controls; with Platform/Operations on environments, access and DR.
- Reliability: SLOs met on market-critical paths; deterministic replay proven quarterly; MTTR trending down.
- Speed & Reuse: Time-to-first-value for new products reduced by >50%; adoption of golden paths/templates across squads >60%.
- Cost: Unit economics (cost per product/feature/inference) visible; ≥15–25% cost-to-serve reduction through optimisation/deprecation.
- Compliance: Zero critical audit findings on lineage, access, retention; automated evidence packs.
- Talent & Capacity: Bench strength in core skills; surge capacity activated without quality or security regressions.
12+ years in data engineering/platform roles, 5+ years leading multi-region teams in real-time, regulated environments (ideally commodity trading/energy/financial markets).
- Global Head of Data: strategy, budget, risk appetite, and executive reporting.
- Lead Data Solution Architect & team: domain roadmaps, solution assurance, reuse/adoption metrics.
- Platform Ops and Data Engineering: CI/CD, observability, identity/secrets, DR/BCP and FinOps.
- Data and AI Governance, Risk, Compliance & Internal Audit: model risk, evidence automation, regulatory readiness, fine-grained FinOps enablement.
- Business Lines: Trading, Risk, Ops/Logistics, Asset Ops/SCADA, Finance/Settlement, Market Analysis; value mapping and SLO reviews.
- Commodity depth: Power/gas, oil, derivatives, time-series operational dashboards.
- Knowledge-graph awareness: Semantic layers (entity/relationship drill-through, lineage/impact views, consistent business terms).
- Advanced geospatial: Mapbox/Leaflet, tiling strategies, clustering, and projection choices for assets, routes, and weather overlays.
- LLM-assisted UX: Patterns for in-workflow assistants, retrieval-augmented explanations, and safe inline summarisation of alerts/incidents.
- Design for low-latency streams: Live updates, batching, and diff-only rendering for high-frequency market data.
- BI engineering partnership: Custom visual specs, semantic model constraints (star/snowflake), and row-level security/RBAC considerations.
- Security basics: Understanding of ABAC/RBAC, PII handling, export controls, and auditability of user actions.
- Performance tuning: p95/p99 render targets, bundle hygiene, virtualisation of large grids, and caching strategies.
- Quant empathy: Comfortable discussing VaR/PFE maths at a conceptual level to avoid misrepresenting risk semantics.
- Prototyping breadth: Interactive prototypes wired to mock APIs; ability to script lightweight data fixtures.
- Change management: Training kits, walkthroughs, and adoption campaigns for front-office and operations users.
Highly numerate, rigorous, and resilient problem-solver. Able to prioritise, multitask, and deliver under time constraints in a fast-paced, high-pressure environment. Strong written and verbal communication in English, with the ability to explain technical topics clearly. Self-motivated, proactive, and detail-oriented. Team player who collaborates effectively across engineering, quant, and trading teams.
Data Engineer Manager in London. Employer: Gunvor Group
Contact Detail:
Gunvor Group Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer Manager role in London
✨Tip Number 1
Network like a pro! Get out there and connect with folks in the industry. Attend meetups, webinars, or even just grab a coffee with someone who’s already in the data engineering space. You never know who might have the inside scoop on job openings!
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your best projects, especially those involving Azure, Snowflake, or event-driven architectures. This will give potential employers a taste of what you can do and set you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on your technical knowledge and soft skills. Be ready to discuss your experience with high-throughput ingestion and data governance. Practice common interview questions and think about how you can demonstrate your leadership abilities.
✨Tip Number 4
Don’t forget to apply through our website! We’re always on the lookout for talented individuals who can help us deliver state-of-the-art data products. Make sure your application stands out by tailoring it to the specific role and highlighting your relevant experience.
We think you need these skills to ace the Data Engineer Manager role in London
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the specific skills and experiences that align with the Data Engineer Manager role. Highlight your expertise in Azure, Snowflake, and event-driven architectures to catch our eye!
Craft a Compelling Cover Letter: Use your cover letter to tell us why you're passionate about data engineering and how you can contribute to our mission. Share examples of your leadership experience and how you've built high-performing teams in the past.
Showcase Your Achievements: Quantify your successes! Whether it's reducing costs or improving time-to-value for data products, we want to see the impact you've made in previous roles. Numbers speak louder than words!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way to ensure your application gets into the right hands and shows you’re serious about joining the team!
How to prepare for a job interview at Gunvor Group
✨Know Your Data Engineering Inside Out
Make sure you’re well-versed in best-in-class data engineering practices. Brush up on your knowledge of Azure, Snowflake, and event-driven architectures. Be ready to discuss how you would build high-throughput ingestion pipelines and the tools you’d use to ensure data governance.
✨Showcase Your Leadership Skills
As a Data Engineer Manager, you’ll need to lead high-performing squads. Prepare examples of how you’ve built and coached teams in the past. Highlight your experience with capacity planning and how you’ve successfully managed resources during peak times.
✨Communicate Clearly About Trade-offs
You’ll be expected to partner with various business lines, so practice articulating trade-offs related to latency, cost, and control. Use specific examples from your previous roles to demonstrate your ability to prioritise and communicate effectively with stakeholders.
✨Prepare for Technical Questions
Expect to dive deep into technical topics like compliance, performance tuning, and security basics. Brush up on your understanding of ABAC/RBAC and be ready to discuss how you would handle PII and auditability of user actions in your projects.