At a Glance
- Tasks: Design and maintain real-time data streaming pipelines using Apache Kafka.
- Company: Join an innovative Financial Services organisation focused on strategic data decision-making.
- Benefits: Enjoy hybrid work options and the freedom to experiment with new technologies.
- Why this job: Shape the future of scalable data pipelines and influence company-wide data strategy.
- Qualifications: 5+ years in Data Engineering with strong Kafka and Python skills required.
- Other info: Be part of a respected team during a significant growth phase.
The predicted salary is between £60,000 and £75,000 per year.
Data Engineer – Financial Services | Strong Kafka/Streaming Focus – Liverpool/Hybrid – Up to £75K (DOE)
Our client, an innovative and rapidly expanding Financial Services organisation, is seeking a Data Engineer to join their highly technical data team. This is a unique opportunity to be part of a forward-thinking company where data is central to strategic decision-making.
We’re looking for someone who brings hands-on experience in streaming data architectures, particularly with Apache Kafka and Confluent Cloud, and is eager to shape the future of scalable, real-time data pipelines. You’ll work closely with both the core Data Engineering team and the Data Science function, bridging the gap between model development and production-grade data infrastructure.
What You’ll Do:
- Design, build, and maintain real-time data streaming pipelines using Apache Kafka and Confluent Cloud.
- Architect and implement robust, scalable data ingestion frameworks for batch and streaming use cases.
- Collaborate with stakeholders to deliver high-quality, reliable datasets to live analytical platforms and machine learning environments.
- Serve as a technical advisor on data infrastructure design across the business.
- Proactively identify improvements and contribute to evolving best practices, with freedom to experiment and implement new technologies or architectures.
- Act as a bridge between Data Engineering and Data Science, ensuring seamless integration between pipelines and model workflows.
- Support data governance, quality, and observability efforts across the data estate.
What We’re Looking For:
- 5+ years of experience in a Data Engineering or related role.
- Strong experience with streaming technologies such as Kafka, Kafka Streams, and/or Confluent Cloud (must-have).
- Solid knowledge of Apache Spark and Databricks.
- Proficiency in Python for data processing and automation.
- Familiarity with NoSQL technologies (e.g., MongoDB, Cassandra, or DynamoDB).
- Exposure to machine learning pipelines or close collaboration with Data Science teams is a plus.
- A self-starter with strong analytical thinking and a “leave it better than you found it” attitude.
- Ability to operate independently while collaborating effectively across teams.
- Strong communication skills and experience engaging with technical and non-technical stakeholders.
Why Join?
- Be part of a highly respected and technically advanced data team at the heart of a thriving business.
- Get ownership of key architecture decisions and the freedom to try new ideas.
- Play a pivotal role in scaling the company’s data capabilities during a phase of significant growth.
- Influence data strategy across business units and leave a lasting impact.
Employer: Maxwell Bond
Contact: Maxwell Bond Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer (Kafka and Streaming) role
✨Tip Number 1
Familiarise yourself with Apache Kafka and Confluent Cloud by working on personal projects or contributing to open-source initiatives. This hands-on experience will not only enhance your skills but also demonstrate your passion for streaming technologies during interviews.
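If you want a concrete starting point for such a personal project, the sketch below shows one minimal shape it could take: a single produce/consume round trip with the confluent-kafka Python client. The broker address, topic name, and payload are illustrative placeholders we've assumed, not anything specified by this role.

```python
# Minimal Kafka round trip with the confluent-kafka client.
# Broker, topic, and payload below are placeholders for a local setup.
import json

from confluent_kafka import Consumer, Producer

BROKER = "localhost:9092"  # placeholder: a local broker or Confluent Cloud endpoint
TOPIC = "demo-events"      # placeholder topic name

# Produce one JSON-encoded event and block until it is delivered.
producer = Producer({"bootstrap.servers": BROKER})
producer.produce(TOPIC, value=json.dumps({"user": "alice", "action": "login"}))
producer.flush()

# Read it back from the start of the topic.
consumer = Consumer({
    "bootstrap.servers": BROKER,
    "group.id": "demo-group",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])
msg = consumer.poll(timeout=10.0)
if msg is not None and msg.error() is None:
    print(json.loads(msg.value()))
consumer.close()
```

Even a toy project like this gives you concrete talking points in interviews about delivery guarantees, consumer groups, and offset handling.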
✨Tip Number 2
Network with professionals in the data engineering field, especially those who have experience with Kafka and streaming data. Attend meetups, webinars, or online forums to connect with industry experts and gain insights that could give you an edge in your application.
✨Tip Number 3
Prepare to discuss real-world scenarios where you've implemented data pipelines or collaborated with data science teams. Being able to articulate your experiences and the impact of your work will showcase your ability to bridge the gap between data engineering and data science.
✨Tip Number 4
Stay updated on the latest trends and best practices in data engineering, particularly around streaming technologies. This knowledge will not only help you in interviews but also show that you're proactive about your professional development and committed to contributing to the company's growth.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Apache Kafka, Confluent Cloud, and any relevant streaming technologies. Use specific examples to demonstrate your hands-on experience in building real-time data pipelines.
Craft a Compelling Cover Letter: In your cover letter, express your enthusiasm for the role and the company. Mention how your skills align with their needs, particularly your experience in data engineering and collaboration with data science teams.
Showcase Relevant Projects: If you have worked on projects involving data ingestion frameworks or machine learning pipelines, be sure to include these in your application. Highlight your contributions and the impact of your work.
Prepare for Technical Questions: Anticipate technical questions related to data streaming architectures and your experience with tools like Apache Spark and Python. Be ready to discuss your problem-solving approach and how you've implemented best practices in previous roles.
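Given the Spark and Kafka emphasis in this role, one pattern worth being able to sketch on a whiteboard is a Spark Structured Streaming job that consumes a Kafka topic. Below is a minimal illustrative example; the broker and topic names are placeholders we've assumed, and running it requires the spark-sql-kafka connector package on the classpath.

```python
# Minimal Spark Structured Streaming job: read a Kafka topic, decode the
# payload, and echo each micro-batch to the console. Broker and topic names
# are placeholders; the spark-sql-kafka connector must be on the classpath.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "demo-events")                   # placeholder topic
    .load()
    .selectExpr("CAST(value AS STRING) AS value")         # Kafka values arrive as bytes
)

# In a production pipeline the sink would be Delta Lake, a warehouse table,
# or another topic rather than the console.
query = events.writeStream.format("console").start()
query.awaitTermination()
```

Being able to explain where you would swap the console sink for Delta Lake or a warehouse table shows exactly the pipeline-to-platform thinking this role asks for.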
How to prepare for a job interview at Maxwell Bond
✨Showcase Your Kafka Expertise
Make sure to highlight your hands-on experience with Apache Kafka and Confluent Cloud during the interview. Be prepared to discuss specific projects where you've designed or maintained streaming data pipelines, as this will demonstrate your technical proficiency.
✨Bridge the Gap
Since the role involves collaboration between Data Engineering and Data Science, be ready to explain how you've successfully worked with cross-functional teams in the past. Share examples of how you facilitated communication and integration between different departments.
✨Discuss Data Governance
The company values data governance and quality, so come prepared to talk about your experience in these areas. Discuss any frameworks or best practices you've implemented to ensure data integrity and observability in your previous roles.
✨Demonstrate a Growth Mindset
Express your eagerness to learn and adapt by sharing instances where you've proactively identified improvements or experimented with new technologies. This aligns with the company's culture of innovation and will show that you're a self-starter who embraces challenges.