At a Glance
- Tasks: Design and maintain real-time data streaming pipelines using Apache Kafka.
- Company: Join an innovative Financial Services organisation focused on strategic data decision-making.
- Benefits: Enjoy hybrid work options and the freedom to experiment with new technologies.
- Why this job: Shape the future of scalable data pipelines and influence company-wide data strategy.
- Qualifications: 5 years in Data Engineering, strong Kafka experience, and proficiency in Python required.
- Other info: Be part of a respected team during a significant growth phase.
The predicted salary is £60,000–£75,000 per year.
Job Description
Data Engineer – Financial Services | Strong Kafka/Streaming Focus | Liverpool (Hybrid) – Up to £75K (DOE)
Our client, an innovative and rapidly expanding Financial Services organisation, is seeking a Data Engineer to join their highly technical data team. This is a unique opportunity to be part of a forward-thinking company where data is central to strategic decision-making.
We’re looking for someone who brings hands-on experience in streaming data architectures, particularly with Apache Kafka and Confluent Cloud, and is eager to shape the future of scalable, real-time data pipelines. You’ll work closely with both the core Data Engineering team and the Data Science function, bridging the gap between model development and production-grade data infrastructure.
What You’ll Do:
- Design, build, and maintain real-time data streaming pipelines using Apache Kafka and Confluent Cloud.
- Architect and implement robust, scalable data ingestion frameworks for batch and streaming use cases.
- Collaborate with stakeholders to deliver high-quality, reliable datasets to live analytical platforms and machine learning environments.
- Serve as a technical advisor on data infrastructure design across the business.
- Proactively identify improvements and contribute to evolving best practices, with freedom to experiment and implement new technologies or architectures.
- Act as a bridge between Data Engineering and Data Science, ensuring seamless integration between pipelines and model workflows.
- Support data governance, quality, and observability efforts across the data estate.
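As a flavour of the pipeline work described above, the sketch below shows a tumbling-window aggregation over a stream of events in plain Python — a simplified stand-in for logic that would normally run inside a Kafka Streams or Spark Structured Streaming job. The event schema, window size, and account names are illustrative assumptions, not details taken from the role.

```python
from collections import defaultdict

def tumbling_window_totals(events, window_seconds=60):
    """Aggregate (timestamp, key, amount) events into fixed, non-overlapping
    time windows -- the kind of logic a streaming job applies to a topic."""
    totals = defaultdict(float)
    for ts, key, amount in events:
        # Bucket each event into the window that starts at the nearest
        # multiple of window_seconds at or before its timestamp.
        window_start = ts - (ts % window_seconds)
        totals[(window_start, key)] += amount
    return dict(totals)

# Toy event stream: (epoch seconds, account, amount)
events = [
    (0, "acct-1", 10.0),
    (30, "acct-1", 5.0),
    (61, "acct-1", 2.0),   # falls into the next 60-second window
    (45, "acct-2", 7.5),
]
print(tumbling_window_totals(events))
# → {(0, 'acct-1'): 15.0, (60, 'acct-1'): 2.0, (0, 'acct-2'): 7.5}
```

In a production Kafka pipeline the events would arrive from a topic via a consumer and the windowed state would be managed by the streaming framework, but the windowing arithmetic is the same idea.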
What We’re Looking For:
- 5 years of experience in a Data Engineering or related role.
- Strong experience with streaming technologies such as Kafka, Kafka Streams, and/or Confluent Cloud (must-have).
- Solid knowledge of Apache Spark and Databricks.
- Proficiency in Python for data processing and automation.
- Familiarity with NoSQL technologies (e.g., MongoDB, Cassandra, or DynamoDB).
- Exposure to machine learning pipelines or close collaboration with Data Science teams is a plus.
- A self-starter with strong analytical thinking and a “leave it better than you found it” attitude.
- Ability to operate independently and also collaborate effectively across teams.
- Strong communication skills and experience engaging with technical and non-technical stakeholders.
Why Join?
- Be part of a highly respected and technically advanced data team at the heart of a thriving business.
- Get ownership of key architecture decisions and the freedom to try new ideas.
- Play a pivotal role in scaling the company’s data capabilities during a phase of significant growth.
- Influence data strategy across business units and leave a lasting impact.
Data Engineer employer: Maxwell Bond
Contact Details:
Maxwell Bond Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role
✨Tip Number 1
Familiarise yourself with Apache Kafka and Confluent Cloud by working on personal projects or contributing to open-source initiatives. This hands-on experience will not only enhance your skills but also demonstrate your passion for streaming data technologies.
✨Tip Number 2
Network with professionals in the data engineering field, especially those who work with financial services. Attend meetups, webinars, or conferences to connect with potential colleagues and learn about industry trends that could give you an edge.
✨Tip Number 3
Showcase your ability to bridge the gap between Data Engineering and Data Science by discussing relevant projects during interviews. Highlight any collaborative experiences you've had that involved integrating data pipelines with machine learning models.
✨Tip Number 4
Stay updated on best practices in data governance and quality management. Being knowledgeable about these areas will position you as a proactive candidate who can contribute to evolving the company's data strategies effectively.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with streaming technologies, particularly Apache Kafka and Confluent Cloud. Use specific examples of projects where you've designed or maintained data pipelines.
Craft a Compelling Cover Letter: In your cover letter, express your enthusiasm for the role and the company. Mention how your skills align with their needs, especially your experience in data engineering and collaboration with data science teams.
Showcase Relevant Projects: If you have worked on relevant projects, describe them briefly in your application. Focus on your contributions to real-time data streaming and any improvements you implemented in previous roles.
Highlight Soft Skills: Don't forget to mention your strong communication skills and ability to work independently as well as collaboratively. These are crucial for engaging with both technical and non-technical stakeholders.
How to prepare for a job interview at Maxwell Bond
✨Showcase Your Streaming Expertise
Make sure to highlight your hands-on experience with Apache Kafka and Confluent Cloud during the interview. Be prepared to discuss specific projects where you've designed or maintained real-time data streaming pipelines, as this is a crucial requirement for the role.
✨Demonstrate Collaboration Skills
Since the role involves working closely with both Data Engineering and Data Science teams, be ready to share examples of how you've successfully collaborated with different stakeholders in the past. This will show your ability to bridge gaps and ensure seamless integration between teams.
✨Prepare for Technical Questions
Expect technical questions related to data ingestion frameworks, batch and streaming use cases, and NoSQL technologies. Brush up on your knowledge of Apache Spark, Databricks, and Python, as these are essential skills for the position.
✨Emphasise Your Problem-Solving Attitude
The company values a 'leave it better than you found it' mindset. Be ready to discuss instances where you've proactively identified improvements in data processes or contributed to best practices. This will demonstrate your initiative and commitment to continuous improvement.