At a Glance
- Tasks: Build and optimise Spark pipelines on Databricks in a high-performance trading environment.
- Company: Dynamic tech firm in London focused on innovative data solutions.
- Benefits: Competitive salary, flexible working hours, and opportunities for professional growth.
- Other info: Hands-on role with excellent career advancement potential.
- Why this job: Join a cutting-edge team and make an impact in the world of data engineering.
- Qualifications: Strong Spark experience, Python skills, and ability to optimise large-scale systems.
The predicted salary is between £60,000 and £80,000 per year.
12-month rolling contract (multi-year programme) – 1 day/week on-site – St Paul’s, London
We’re hiring Senior and Lead Data Engineers to build a Databricks lakehouse platform in a high-performance, business-critical front-office trading environment. This is a hands-on engineering role focused on building and optimising large-scale distributed data systems. The team is highly technical and operates at scale, so we’re looking for engineers with deep data engineering expertise, strong low-level Spark knowledge, and experience building high-performance systems on modern Databricks and AI-driven platforms.
What you’ll do:
- Build and optimise Spark pipelines on Databricks
- Develop a lakehouse platform (Medallion architecture)
- Own data modelling, architecture, and pipeline design
- Work with large-scale data (TB–PB)
- Drive performance, scalability, and reliability in production
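For candidates unfamiliar with the term, the Medallion architecture mentioned above is the bronze/silver/gold layering pattern used in Databricks lakehouses: raw data lands untouched, is then cleaned and validated, and is finally aggregated for business use. A minimal, framework-free sketch of that flow (plain Python standing in for what would be Spark jobs writing Delta tables; the record shape and table names are illustrative assumptions, not part of the role description):

```python
# Minimal sketch of Medallion (bronze/silver/gold) layering.
# In a real Databricks lakehouse each layer is a Delta table written
# by a Spark pipeline; plain Python is used here only to show the flow.

def bronze_ingest(raw_records):
    """Bronze: land data as-is, with no transformation."""
    return list(raw_records)

def silver_clean(bronze):
    """Silver: validate and normalise (drop rows with no price)."""
    return [
        {"symbol": r["symbol"].upper(), "price": float(r["price"])}
        for r in bronze
        if r.get("price") is not None
    ]

def gold_aggregate(silver):
    """Gold: business-level aggregate (average price per symbol)."""
    totals = {}
    for r in silver:
        total, count = totals.get(r["symbol"], (0.0, 0))
        totals[r["symbol"]] = (total + r["price"], count + 1)
    return {sym: total / count for sym, (total, count) in totals.items()}

raw = [
    {"symbol": "abc", "price": "10.0"},
    {"symbol": "ABC", "price": "12.0"},
    {"symbol": "xyz", "price": None},  # rejected at the silver layer
]
gold = gold_aggregate(silver_clean(bronze_ingest(raw)))
# gold -> {"ABC": 11.0}
```

The same three-stage shape scales from this toy example up to the TB–PB pipelines the role involves; each layer boundary is where schema enforcement, quality checks, and performance tuning typically live.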
What we’re looking for:
- Strong experience running Spark workloads in production
- Proven ability to optimise Spark at scale (TB–PB datasets)
- Solid Python (Scala beneficial, not essential)
- Experience with data modelling and lakehouse architecture
- Ability to debug and improve performance in distributed systems
Important:
- Must have recent, hands-on Spark experience
- Databricks strongly preferred (not essential if Spark depth is very strong)
- Experience supporting AI/ML or advanced analytics platforms is a big plus
Nice to have:
- Experience in performance-critical environments
Not a fit if:
- Primarily BI / reporting focused
- Spark used only at small scale or outside production
- No experience with performance optimisation in distributed systems
We’re looking for engineers who can design, build, and optimise Spark-based systems at scale and operate effectively in a performance-critical environment from day one.
Senior Data Engineer employer: CipherTek Recruitment
Contact Detail:
CipherTek Recruitment Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data Engineer role
✨Tip Number 1
Network like a pro! Reach out to your connections in the data engineering field, especially those who work with Spark and Databricks. A friendly chat can lead to insider info about job openings or even referrals.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving large-scale data systems and Spark pipelines. This will give potential employers a taste of what you can do and set you apart from the crowd.
✨Tip Number 3
Prepare for technical interviews by brushing up on your Spark knowledge and problem-solving skills. Practice coding challenges related to data engineering and be ready to discuss your past experiences with performance optimisation.
✨Tip Number 4
Don’t forget to apply through our website! We’re always on the lookout for talented engineers like you. Keep an eye on our listings and make sure your application stands out by highlighting your hands-on experience with Spark and Databricks.
We think you need these skills to ace the Senior Data Engineer interview
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Spark and Databricks. We want to see how you've built and optimised data systems, so don’t hold back on the details!
Showcase Your Projects: Include specific projects where you’ve worked with large-scale data and performance optimisation. We love seeing real-world examples of your skills in action!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Tell us why you're passionate about data engineering and how your expertise aligns with our needs. Keep it engaging and relevant!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role!
How to prepare for a job interview at CipherTek Recruitment
✨Know Your Spark Inside Out
Make sure you brush up on your Spark knowledge before the interview. Be ready to discuss how you've optimised Spark workloads in production, especially with large datasets. Prepare examples of challenges you've faced and how you overcame them.
✨Showcase Your Databricks Experience
If you've worked with Databricks, highlight your experience with it during the interview. Discuss any lakehouse platforms you've built or contributed to, and be prepared to explain the Medallion architecture and its benefits.
✨Demonstrate Problem-Solving Skills
Expect technical questions that test your ability to debug and improve performance in distributed systems. Think of specific scenarios where you had to troubleshoot issues and how you approached solving them. This will show your hands-on engineering skills.
✨Align with Their Performance-Critical Focus
Since this role is in a performance-critical environment, be ready to discuss your experience in such settings. Share examples of how you've driven performance, scalability, and reliability in your previous projects, and how you can bring that expertise to their team.