At a Glance
- Tasks: Join a team to build and optimise Spark jobs for a modern data platform.
- Company: Renowned financial services organisation with a focus on innovation.
- Benefits: Flexible working options and opportunities for professional growth.
- Other info: Work in a small Agile team with excellent career development prospects.
- Why this job: Make an impact in the financial sector using cutting-edge technology.
- Qualifications: Hands-on experience with Apache Spark, Kubernetes, and programming in Python or Scala.
The predicted salary is between £55,000 and £65,000 per year.
Your new company
You will be working for a renowned financial services organisation.
Your new role
We are seeking a Data Engineer to support the replacement of a legacy ETL tool with a modern Apache Spark based data platform. This is a hands-on engineering role focused on building and supporting Spark jobs, with a strong emphasis on performance optimisation, reliability, and scalability. You will run Spark workloads in containerised environments using Kubernetes, and programming skills in Python, Scala, or Java are also required. The role sits within a small Agile delivery team of four engineers (two onshore and two in Shenzhen), working closely with a Senior Data Engineer. You will be responsible for development work, sprint delivery, demos, documentation, and stakeholder engagement. This position suits a mid-level engineer with strong Spark development experience rather than design, infrastructure, or management responsibilities.
What you'll need to succeed
- Strong hands-on experience with Apache Spark, including writing and tuning Spark jobs (PySpark development experience).
- Strong experience working in containerised environments using Kubernetes.
- Experience with programming in Python or Scala.
- Exposure to Big Data technologies and distributed data processing.
- Some experience using Java/Java Spring Boot for development.
- Experience with an Ops way of working, not purely development: you know how to deploy solutions.
- Experience with OpenShift would be highly desirable!
- Experience working in an Agile way of working (Scrum, sprints, demos).
- Financial services or professional services experience required.
What you'll get in return
Flexible working options available.
What you need to do now
If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now.
Data Engineer (Spark/ Kubernetes) (Financial Services) in City of London employer: Hays Technology
Contact Detail:
Hays Technology Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Data Engineer (Spark/ Kubernetes) (Financial Services) in City of London
✨Tip Number 1
Network like a pro! Reach out to your connections in the financial services sector and let them know you're on the hunt for a Data Engineer role. You never know who might have the inside scoop on job openings or can put in a good word for you.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your Apache Spark projects, especially those involving Kubernetes. This will give potential employers a taste of what you can do and set you apart from the competition.
✨Tip Number 3
Prepare for interviews by brushing up on your Agile methodologies and be ready to discuss your experience with Spark jobs and containerised environments. Practise common interview questions and have examples ready that highlight your hands-on experience.
✨Tip Number 4
Don't forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you're serious about landing that Data Engineer gig with us!
We think you need these skills to ace Data Engineer (Spark/ Kubernetes) (Financial Services) in City of London
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your hands-on experience with Apache Spark and Kubernetes. We want to see how your skills align with the role, so don’t be shy about showcasing your relevant projects!
Showcase Your Agile Experience: Since this role is all about working in an Agile environment, let us know about your experience with Scrum, sprints, and demos. We love seeing candidates who can thrive in a team setting!
Be Clear and Concise: When writing your application, keep it straightforward. We appreciate clarity, so avoid jargon and get straight to the point about your skills and experiences that relate to the job.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it’s super easy!
How to prepare for a job interview at Hays Technology
✨Know Your Spark Inside Out
Make sure you brush up on your Apache Spark knowledge. Be ready to discuss how you've written and tuned Spark jobs in the past. Prepare specific examples of performance optimisation you've implemented, as this will show your hands-on experience.
✨Containerisation is Key
Since the role involves working with Kubernetes, be prepared to talk about your experience in containerised environments. Have a few scenarios ready where you've successfully deployed Spark workloads using Kubernetes, highlighting any challenges you faced and how you overcame them.
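As a talking point for this topic, the shape of a Spark-on-Kubernetes submission looks roughly like the sketch below. It is a config fragment, not a runnable command: the API-server address, image name, namespace, and application path are all placeholders you would replace with your cluster's values.

```shell
# Hedged sketch of submitting a Spark job to a Kubernetes cluster (placeholders throughout).
spark-submit \
  --master k8s://https://<k8s-apiserver>:6443 \
  --deploy-mode cluster \
  --name etl-job \
  --conf spark.kubernetes.container.image=<registry>/spark-app:latest \
  --conf spark.kubernetes.namespace=data-platform \
  --conf spark.executor.instances=4 \
  --conf spark.executor.memory=4g \
  local:///opt/spark/app/etl_job.py
```

Being able to explain what each of these settings controls (driver vs executor placement, the container image, executor sizing) is exactly the kind of hands-on detail this tip is pointing at.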
✨Agile Mindset Matters
This position is within a small Agile team, so be ready to discuss your experience with Agile methodologies. Share examples of how you've contributed to sprints, demos, and stakeholder engagement. This will demonstrate that you can thrive in a collaborative environment.
✨Programming Proficiency
You’ll need to showcase your programming skills in Python, Scala, or Java. Prepare to discuss specific projects where you've used these languages, especially in relation to Big Data technologies. If you have experience with OpenShift, make sure to highlight that too!