Senior Data Engineer (Data EcoSystems)

Full-Time £60,000 – £80,000 / year (est.) No remote work available
Kpler

At a Glance

  • Tasks: Design and implement data ingestion pipelines using cutting-edge technologies.
  • Company: Join Kpler, a leader in global trade intelligence with a diverse team.
  • Benefits: Competitive salary, inclusive culture, and opportunities for professional growth.
  • Why this job: Make a real impact by transforming complex data into actionable insights.
  • Qualifications: 5+ years in data engineering, strong skills in Java, Python, and SQL.
  • Other info: Dynamic work environment with a commitment to diversity and inclusion.

The predicted salary is between £60,000 and £80,000 per year.

At Kpler, we are dedicated to helping our clients navigate complex markets with ease. By simplifying global trade information and providing valuable insights, we empower organisations to make informed decisions in commodities, energy, and maritime sectors. Since our founding in 2014, we have focused on delivering top-tier intelligence through user-friendly platforms. Our team of over 700 experts from 35+ countries works tirelessly to transform intricate data into actionable strategies, ensuring our clients stay ahead in a dynamic market landscape. Join us to leverage cutting-edge innovation for impactful results and experience unparalleled support on your journey to success.

Kpler is hiring a Senior Data Engineer to own the design, delivery, and reliability of platform capabilities that ingest and deliver data at scale. You'll lead complex, high-impact initiatives that support critical business outcomes, shape engineering best practices across data ingestion, governance, and distribution, and partner closely with other teams to improve developer experience and platform adoption. The role also includes contributing to client‑facing solutions and data access features that enable customers to reliably consume Kpler's data.

Key Responsibilities
  • Design and implement Snowflake ingestion pipelines in Java, sourcing data from upstream systems and delivering high-quality solutions for high-impact, ambiguous, and complex problem spaces.
  • Own the reliability and operational excellence of ingestion services, including monitoring and observability, stability, performance, scalability, and on‑call readiness, while making sound engineering trade‑offs aligned with business needs.
  • Maintain and evolve existing pipelines by identifying bottlenecks, reducing failure modes, and proactively addressing issues before they impact users or downstream systems.
  • Lead technical design for complex initiatives, driving end‑to‑end architecture and implementation while coordinating across teams and stakeholders.
  • Drive continuous improvement to developer experience for data ingestion by contributing to platform standards, improving usability and documentation, and partnering with other teams to enable adoption and consistent best practices.
  • Refactor and modernise complex systems pragmatically, balancing technical debt against delivery timelines and proposing scope changes as priorities shift.
  • Lead and debug production issues and major incidents under pressure, ensuring rapid mitigation, clear communication, and follow‑through on corrective actions.
  • Contribute to the platform roadmap and cross‑organisational technical direction by participating in architecture forums and broader engineering initiatives that strengthen the discipline.
  • Collaborate on platform‑adjacent projects as needed, including client‑facing data access and delivery solutions.
  • Partner on data governance and risk communication, working effectively with functions such as compliance and clearly articulating technical risk and trade‑offs to non‑technical stakeholders.
Skills and Experience
  • Bachelor’s degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
  • 5+ years of professional experience in software engineering and/or data engineering roles, with sustained ownership of production systems.
  • Demonstrated experience building, operating, and evolving data ingestion pipelines (batch and/or streaming) for analytics platforms.
  • Strong hands‑on proficiency in Java/Scala, Python, and SQL, with solid data modelling fundamentals.
  • Proven experience in software and systems design, applying best practices for performance, testability, maintainability, and operational robustness.
  • Strong database and analytics warehouse fundamentals, including schema and table design for analytical workloads, query performance concepts, and practical experience with Snowflake or equivalent cloud data warehouses.
  • Demonstrated ownership of non‑functional requirements, making sound engineering decisions that balance system design, technical debt, reliability, observability, and business needs.
  • Solid DataOps and operational ownership mindset, including running production data services, implementing observability (metrics, logs, alerts), and improving reliability through automation and standard practices.
  • Experience working with cloud platforms (AWS), including operating data platforms and services in production environments.
  • Exposure to deploying and operating services using Kubernetes and CI/CD pipelines, including GitOps‑style workflows with PR‑based deployments, environment promotion, and reproducible releases.
  • Track record of delivering well‑designed, maintainable systems and owning reliability and observability for production workloads.
Nice to have
  • Experience operating Snowflake at scale, including warehouse and resource sizing, cost and performance trade‑offs, workload optimisation, and operational best practices.
  • Familiarity with data governance concepts such as access control, auditability, data retention, and lineage.
  • Experience contributing to platform developer experience through tooling, documentation, and paved paths, and influencing engineering practices beyond the immediate team.
  • Experience with TypeScript and Vue.js, integrating with APIs and presenting data within end‑user workflows such as Excel add‑ins.
  • Experience with Python and boto3, and with AWS‑based client delivery patterns including S3 and SFTP‑style data distribution, relevant to data‑sharing platforms.

We make things happen. We act decisively and with purpose, going the extra mile. We build together. We foster relationships and develop creative solutions to address market challenges. We are here to help. We are accessible and supportive to colleagues and clients with a friendly approach.

Our People Pledge: Don’t meet every single requirement? Research shows that women and people of colour are less likely than others to apply if they feel they don’t match 100% of the job requirements. Don’t let the confidence gap stand in your way; we’d love to hear from you! We understand that experience comes in many different forms and are dedicated to adding new perspectives to the team. Kpler is committed to providing a fair, inclusive, and diverse work environment. We believe that different perspectives lead to better ideas, and better ideas allow us to better understand the needs and interests of our diverse, global community. We welcome people of different backgrounds, experiences, abilities, and perspectives and are an equal opportunity employer.

By applying, I confirm that I have read and accept the Staff Privacy Notice. We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analysing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.

Senior Data Engineer (Data EcoSystems) employer: Kpler

At Kpler, we pride ourselves on being an exceptional employer, offering a dynamic work culture that fosters innovation and collaboration among our diverse team of over 700 experts. Our commitment to employee growth is evident through continuous learning opportunities and a supportive environment that encourages creative problem-solving. Located in a vibrant city, we provide competitive benefits and a flexible work-life balance, making Kpler an ideal place for professionals seeking meaningful and rewarding careers in data engineering.

Contact Detail:

Kpler Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Senior Data Engineer (Data EcoSystems) role

✨Tip Number 1

Network like a pro! Reach out to current or former employees at Kpler on LinkedIn. A friendly chat can give you insider info and maybe even a referral, which can really boost your chances.

✨Tip Number 2

Prepare for the interview by diving deep into Kpler's products and services. Understand how they simplify global trade information and think about how your skills as a Senior Data Engineer can contribute to that mission.

✨Tip Number 3

Show off your problem-solving skills! Be ready to discuss past projects where you tackled complex data challenges. Use specific examples to demonstrate how you’ve designed and implemented effective data ingestion pipelines.

✨Tip Number 4

Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in joining the Kpler team.

We think you need these skills to ace the Senior Data Engineer (Data EcoSystems) role

Data Ingestion Pipelines
Java
Scala
Python
SQL
Snowflake
Cloud Platforms (AWS)
Kubernetes
CI/CD Pipelines
DataOps
Observability
Data Governance
Technical Design
Problem-Solving Skills
Collaboration

Some tips for your application 🫡

Tailor Your Application: Make sure to customise your CV and cover letter for the Senior Data Engineer role. Highlight your experience with data ingestion pipelines and any relevant technologies like Java, Snowflake, or AWS. We want to see how your skills align with what we do at Kpler!

Showcase Your Projects: Don’t just list your job responsibilities; share specific projects you’ve worked on that demonstrate your expertise in data engineering. Include details about challenges you faced and how you overcame them. This helps us understand your problem-solving skills!

Be Clear and Concise: When writing your application, keep it straightforward and to the point. Use clear language and avoid jargon unless it's relevant. We appreciate a well-structured application that makes it easy for us to see your qualifications.

Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way to ensure your application gets into the right hands. Plus, you’ll find all the info you need about the role and our company culture there!

How to prepare for a job interview at Kpler

✨Know Your Tech Stack

Make sure you’re well-versed in the technologies mentioned in the job description, especially Java, Snowflake, and SQL. Brush up on your data ingestion pipeline knowledge and be ready to discuss your past experiences with these tools.

✨Showcase Problem-Solving Skills

Prepare to discuss complex problems you've tackled in previous roles. Kpler values candidates who can lead high-impact initiatives, so think of specific examples where you’ve designed solutions or improved existing systems.

✨Understand the Business Context

Familiarise yourself with Kpler’s mission and the sectors they operate in. Being able to articulate how your technical skills can help clients navigate complex markets will show that you’re not just a techie but also understand the bigger picture.

✨Communicate Clearly

Practice explaining technical concepts in simple terms. You’ll likely need to communicate with non-technical stakeholders, so being able to articulate your thoughts clearly will set you apart from other candidates.
