At a Glance
- Tasks: Build and optimise Spark pipelines on Databricks for a high-performance trading environment.
- Company: Join a leading firm in the finance sector with a focus on innovation.
- Benefits: Competitive salary, flexible working hours, and opportunities for professional growth.
- Other info: Work in a fast-paced environment with excellent career advancement opportunities.
- Why this job: Make an impact by developing cutting-edge data systems in a dynamic team.
- Qualifications: Strong Spark experience, Python skills, and a passion for data engineering.
The predicted salary is between £70,000 and £90,000 per year.
12-month rolling contract (multi-year programme) | 1 day/week on-site – St Paul's, London
We're hiring a Senior/Lead Data Engineer to build a Databricks lakehouse platform in a high-performance, business-critical front-office trading environment. This is a hands-on engineering role focused on building and optimising large-scale distributed data systems. The team is highly technical and operates at scale, so we're looking for engineers with deep data engineering expertise, strong low-level Spark knowledge, and experience building high-performance systems on modern Databricks and AI-driven platforms.
What you'll do:
- Build and optimise Spark pipelines on Databricks
- Develop a lakehouse platform (Medallion architecture)
- Own data modelling, architecture, and pipeline design
- Work with large-scale data (TB–PB)
- Drive performance, scalability, and reliability in production
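To make the Medallion architecture mentioned above concrete, here is a toy sketch of the bronze/silver/gold layering. It uses plain Python dicts purely for illustration (a real pipeline in this role would use Spark DataFrames on Databricks); the sample trade records and column names are invented for the example.

```python
# Toy illustration of the Medallion (bronze/silver/gold) layers.
# Bronze = raw append-only ingest, silver = validated/deduplicated,
# gold = business-level aggregates. Plain Python stands in for Spark here.
from collections import defaultdict

# Bronze: raw ingest, duplicates and bad rows included.
bronze = [
    {"trade_id": 1, "symbol": "VOD", "qty": 100, "price": 0.72},
    {"trade_id": 1, "symbol": "VOD", "qty": 100, "price": 0.72},  # duplicate
    {"trade_id": 2, "symbol": "BARC", "qty": -50, "price": 1.54},
    {"trade_id": 3, "symbol": "VOD", "qty": None, "price": 0.73},  # bad row
]

def to_silver(rows):
    """Silver: deduplicate on trade_id and drop rows that fail validation."""
    seen, out = set(), []
    for row in rows:
        if row["trade_id"] in seen or row["qty"] is None:
            continue
        seen.add(row["trade_id"])
        out.append(row)
    return out

def to_gold(rows):
    """Gold: business aggregate -- net quantity per symbol."""
    totals = defaultdict(int)
    for row in rows:
        totals[row["symbol"]] += row["qty"]
    return dict(totals)

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'VOD': 100, 'BARC': -50}
```

The same shape maps directly onto Delta tables on Databricks, with each layer materialised as its own table and Spark handling the dedup and aggregation at TB–PB scale.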
What we're looking for:
- Strong experience running Spark workloads in production
- Proven ability to optimise Spark at scale (TB–PB datasets)
- Solid Python (Scala beneficial, not essential)
- Experience with data modelling and lakehouse architecture
- Ability to debug and improve performance in distributed systems
Important:
- Must have recent, hands-on Spark experience
- Databricks strongly preferred (not essential if Spark depth is very strong)
- Experience supporting AI/ML or advanced analytics platforms is a big plus
Nice to have:
- Experience in performance-critical environments
Not a fit if:
- Primarily BI / reporting focused
- Spark used only at small scale or outside production
- No experience with performance optimisation in distributed systems
We're looking for engineers who can design, build, and optimise Spark-based systems at scale and operate effectively in a performance-critical environment from day one.
Senior Data Engineer employer: CipherTek Recruitment
Contact details: CipherTek Recruitment Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data Engineer role
✨Tip Number 1
Network like a pro! Reach out to your connections in the data engineering field, especially those who work with Spark and Databricks. A friendly chat can lead to insider info about job openings or even referrals.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving large-scale data systems and Spark pipelines. This will give potential employers a taste of what you can do before they even meet you.
✨Tip Number 3
Prepare for technical interviews by brushing up on your Spark knowledge and problem-solving skills. Practice coding challenges that focus on performance optimisation and distributed systems to impress the interviewers.
✨Tip Number 4
Don’t forget to apply through our website! We’re always on the lookout for talented engineers like you. Plus, applying directly shows your enthusiasm and commitment to joining our team.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Spark and Databricks. We want to see how you've built and optimised data systems in the past, so don’t hold back on those details!
Showcase Your Projects: Include specific projects where you’ve worked with large-scale data and performance optimisation. We love seeing real-world examples of your skills in action, especially if they relate to the role.
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Tell us why you're passionate about data engineering and how your experience aligns with our needs. Keep it engaging and relevant to the job description.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it’s super easy!
How to prepare for a job interview at CipherTek Recruitment
✨Know Your Spark Inside Out
Make sure you brush up on your Spark knowledge before the interview. Be ready to discuss your experience with running Spark workloads in production and how you've optimised them at scale. Prepare specific examples of challenges you've faced and how you overcame them.
✨Familiarise Yourself with Databricks
Even if you don't have extensive experience with Databricks, it's crucial to understand its core functionalities and how it integrates with Spark. We recommend exploring online resources or tutorials to get a grasp of the lakehouse architecture and Medallion design principles.
✨Showcase Your Data Modelling Skills
Be prepared to talk about your experience with data modelling and pipeline design. Think of instances where you've had to make architectural decisions and how those impacted performance and scalability. This will demonstrate your ability to own these aspects in a high-performance environment.
✨Prepare for Technical Questions
Expect technical questions that test your problem-solving skills in distributed systems. Practice debugging scenarios and think through how you would improve performance in a production setting. This will show that you're not just familiar with the theory but can apply it practically.