At a Glance
- Tasks: Build and optimise Spark jobs in a dynamic financial services environment.
- Company: Join a renowned financial services organisation with a focus on innovation.
- Benefits: Enjoy flexible working options and a collaborative team atmosphere.
- Other info: Be part of a small, agile team with opportunities for professional growth.
- Why this job: Make a real impact by modernising data platforms with cutting-edge technology.
- Qualifications: Strong experience with Apache Spark, Kubernetes, Python or Scala, and Agile methodologies.
The predicted salary is between £60,000 and £80,000 per year.
Your new company
Working for a renowned financial services organisation.
Your new role
We are seeking a Data Engineer to support the replacement of a legacy ETL tool with a modern Apache Spark-based data platform. This is a hands-on engineering role focused on building and supporting Spark jobs, with a strong emphasis on performance optimisation, reliability, and scalability. Experience working in containerised environments using Kubernetes is a key element, as is experience with Python, Scala, or Java. The role sits within a small Agile delivery team of four engineers (two onshore and two in Shenzhen), working closely with a Senior Data Engineer. You will be responsible for development work, sprint delivery, demos, documentation, and stakeholder engagement. This position suits a mid-to-senior-level engineer with strong Spark development experience rather than design, infrastructure, or management responsibilities.
What you'll need to succeed
- Strong hands-on experience with Apache Spark, including writing and tuning Spark jobs; PySpark development experience.
- Experience with Airflow and SQL.
- Strong experience working with containerised environments using Kubernetes.
- Experience with programming in Python or Scala.
- Experience with an Ops way of working, not purely development: you know how to deploy solutions.
- Experience with OpenShift would be highly desirable.
- Experience working in an Agile environment (Scrum, sprints, demos).
- A financial services or professional services background is required.
What you'll get in return
Flexible working options available.
What you need to do now
If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV.
Data Engineer (Spark/ Kubernetes) (Financial Services) employer: HAYS Specialist Recruitment
Contact Detail:
HAYS Specialist Recruitment Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Data Engineer (Spark/ Kubernetes) (Financial Services)
✨Tip Number 1
Network like a pro! Reach out to your connections in the financial services sector and let them know you're on the hunt for a Data Engineer role. You never know who might have the inside scoop on job openings or can put in a good word for you.
✨Tip Number 2
Show off your skills! If you've got experience with Apache Spark, Kubernetes, or any of the other tech mentioned, make sure to highlight that in conversations. Consider creating a portfolio or GitHub repo showcasing your projects to impress potential employers.
✨Tip Number 3
Prepare for those interviews! Brush up on your knowledge of Agile methodologies and be ready to discuss how you've used Spark and Kubernetes in past projects. Practising common interview questions can help you feel more confident when it’s time to shine.
✨Tip Number 4
Don't forget to apply through our website! We’ve got loads of opportunities waiting for talented Data Engineers like you. Plus, applying directly can sometimes give you an edge over other candidates.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your hands-on experience with Apache Spark and Kubernetes. We want to see how your skills align with the role, so don’t be shy about showcasing your Spark jobs and any performance optimisation work you've done!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data engineering and how your experience in financial services makes you a great fit for us. Keep it concise but impactful!
Showcase Your Agile Experience: Since we work in an Agile environment, make sure to mention your experience with Scrum, sprints, and demos. We love seeing candidates who can thrive in a collaborative setting, so share examples of how you've contributed to team success!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it’s super easy – just click 'apply now' and follow the prompts!
How to prepare for a job interview at HAYS Specialist Recruitment
✨Know Your Spark Inside Out
Make sure you brush up on your Apache Spark knowledge before the interview. Be ready to discuss your experience with writing and tuning Spark jobs, as well as any performance optimisation techniques you've used. They’ll want to see that you can not only build Spark jobs but also make them run efficiently.
✨Show Off Your Kubernetes Skills
Since working in containerised environments using Kubernetes is key for this role, be prepared to talk about your hands-on experience. Share specific examples of how you've deployed applications in Kubernetes and any challenges you faced along the way. This will show that you’re not just familiar with the technology, but that you can effectively use it in real-world scenarios.
✨Demonstrate Agile Experience
This role is part of a small Agile team, so highlight your experience with Agile methodologies like Scrum. Be ready to discuss how you've contributed to sprint planning, demos, and stakeholder engagement in previous roles. Showing that you can work collaboratively in an Agile environment will set you apart from other candidates.
✨Prepare for Technical Questions
Expect some technical questions related to Python, Scala, and SQL during the interview. Brush up on your coding skills and be ready to solve problems on the spot. Practising common coding challenges or discussing past projects where you’ve used these languages will help you feel more confident.