At a Glance
- Tasks: Build and optimise Spark jobs for a modern data platform in a dynamic team.
- Company: Join a renowned financial services organisation with a strong reputation.
- Benefits: Competitive salary, flexible working options, and opportunities for professional growth.
- Other info: Collaborate in a small Agile team and engage with stakeholders for real-world impact.
- Why this job: Make an impact by transforming legacy systems into cutting-edge data solutions.
- Qualifications: Hands-on experience with Apache Spark, Kubernetes, and programming in Python or Scala.
The predicted salary is between £60,000 and £80,000 per year.
Your new company
You'll be working for a renowned financial services organisation.
Your new role
We are seeking a Data Engineer to support the replacement of a legacy ETL tool with a modern Apache Spark based data platform. This is a hands-on engineering role focused on building and supporting Spark jobs, with a strong emphasis on performance, reliability, and scalability. Experience working in containerised environments using Kubernetes is a key element, as is programming experience across Python, Scala, and Java. The role sits within a small Agile delivery team of four engineers (two onshore and two in Shenzhen), working closely with a Senior Data Engineer. You will be responsible for development work, sprint delivery, demos, documentation, and stakeholder engagement. This position suits a mid to senior level engineer with strong Spark development experience; it does not carry design, infrastructure, or management responsibilities.
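To give a feel for what "Spark in a containerised environment" can look like in practice, here is a rough sketch of a job definition, assuming the Kubernetes Spark Operator is in use. Every name, namespace, and image below is invented for illustration; the actual platform may use plain `spark-submit` against Kubernetes or a different operator entirely.

```yaml
# Hypothetical SparkApplication manifest (Kubernetes Spark Operator).
# All names, namespaces, and image references are placeholders.
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: etl-job              # hypothetical job name
  namespace: data-platform   # hypothetical namespace
spec:
  type: Python
  mode: cluster
  image: registry.example.com/spark-etl:latest   # placeholder image
  mainApplicationFile: local:///opt/app/etl_job.py
  sparkVersion: "3.5.0"
  driver:
    cores: 1
    memory: "1g"
  executor:
    instances: 2             # scale out by raising executor count
    cores: 2
    memory: "2g"
```

In a setup like this, tuning work often happens at exactly these knobs: executor counts, cores, and memory per executor, alongside the Spark configuration inside the job itself.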
What you'll need to succeed
- Strong hands-on experience with Apache Spark, including writing and tuning Spark jobs (PySpark development experience).
- Experience with Airflow and SQL.
- Strong experience working in containerised environments using Kubernetes.
- Experience with programming in Python or Scala.
Data Engineer (Spark/Kubernetes) (Financial Services) in London employer: Hays
Contact Detail:
Hays Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Data Engineer (Spark/Kubernetes) (Financial Services) in London
✨Tip Number 1
Network like a pro! Reach out to your connections in the financial services sector and let them know you're on the hunt for a Data Engineer role. You never know who might have the inside scoop on job openings or can put in a good word for you.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your best Spark projects, especially those that highlight performance optimisation. This will give potential employers a taste of what you can bring to the table.
✨Tip Number 3
Prepare for technical interviews by brushing up on your Spark and Kubernetes knowledge. Practice coding challenges and be ready to discuss your past projects in detail. We want you to shine during those interviews!
✨Tip Number 4
Don't forget to apply through our website! It's the best way to ensure your application gets noticed. Plus, we love seeing candidates who are proactive about their job search.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your hands-on experience with Apache Spark and Kubernetes. We want to see how your skills align with the role, so don’t be shy about showcasing your relevant projects and achievements!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re excited about this role and how your background in data engineering makes you a perfect fit. We love seeing your personality come through, so keep it engaging!
Showcase Your Technical Skills: When filling out your application, be sure to mention your experience with Python, Scala, and any tools like Airflow. We’re looking for specific examples of how you’ve used these technologies to solve problems or improve processes.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it’s super easy – just follow the prompts and you’ll be all set!
How to prepare for a job interview at Hays
✨Know Your Spark Inside Out
Make sure you brush up on your Apache Spark knowledge. Be ready to discuss your experience with writing and tuning Spark jobs, as well as any performance optimisation techniques you've used. Prepare to share specific examples of how you've improved job performance in past projects.
✨Show Off Your Kubernetes Skills
Since working in containerised environments is key for this role, be prepared to talk about your experience with Kubernetes. Think of scenarios where you've deployed applications or managed containers, and be ready to explain the challenges you faced and how you overcame them.
✨Demonstrate Agile Mindset
This role is part of a small Agile team, so it's important to show that you understand Agile principles. Be ready to discuss your experience with sprints, demos, and stakeholder engagement. Share how you've collaborated with team members to deliver projects successfully.
✨Prepare for Technical Questions
Expect some technical questions related to Python, Scala, and SQL. Brush up on your coding skills and be ready to solve problems on the spot. Practising common coding challenges can help you feel more confident during the interview.