At a Glance
- Tasks: Design and optimise algorithms for pricing and forecasting in a data-driven environment.
- Company: Join a leading SaaS platform in a complex, regulated industry.
- Benefits: Remote work, competitive salary, and opportunities for professional growth.
- Why this job: Make a real impact by building scalable solutions with cutting-edge technology.
- Qualifications: Proven experience in Python and distributed data processing with Spark.
- Other info: Collaborative team culture with opportunities for mentorship and technical leadership.
The predicted salary is between £48,000 and £72,000 per year.
We’re hiring a Senior Algorithm Engineer to join a data-intensive SaaS platform operating in a complex, regulated industry. This is a hands-on senior IC role focused on building and optimising distributed data pipelines that power pricing, forecasting and billing calculations at scale.
What you’ll be doing:
- Design, build and deploy algorithms/data models supporting pricing, forecasting and optimisation use cases in production
- Develop and optimise distributed Spark/PySpark batch pipelines for large-scale data processing (see the sketch after this list)
- Write production-grade Python workflows implementing complex, explainable business logic
- Work with Databricks for job execution, orchestration and optimisation
- Improve pipeline performance, reliability and cost efficiency across high-volume workloads
- Collaborate with engineers and domain specialists to translate requirements into scalable solutions
- Provide senior-level ownership through technical leadership, mentoring and best-practice guidance
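To give a flavour of the day-to-day work, here is a minimal sketch of the kind of PySpark batch job involved. The paths, column names and aggregation are hypothetical placeholders for illustration only, not the platform's actual pipeline.

```python
# Illustrative only: a minimal PySpark batch job of the kind described above.
# Paths, columns and the aggregation are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pricing_batch_example").getOrCreate()

# Read raw half-hourly meter readings (hypothetical schema: meter_id, ts, kwh)
readings = spark.read.parquet("s3://example-bucket/meter_readings/")

# Aggregate consumption per meter per day as an input to a pricing calculation
daily = (
    readings
    .withColumn("date", F.to_date("ts"))
    .groupBy("meter_id", "date")
    .agg(F.sum("kwh").alias("daily_kwh"))
)

# Write partitioned output for downstream forecasting/billing jobs
daily.write.mode("overwrite").partitionBy("date").parquet(
    "s3://example-bucket/daily_consumption/"
)
```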
Key experience required:
- Proven experience delivering production algorithms/data models (forecasting, pricing, optimisation or similar)
- Strong Python proficiency and exposure to the modern data stack (SQL, Pandas/NumPy, PySpark; Dask/Polars/DuckDB a bonus)
- Experience building, scheduling and optimising Spark/PySpark pipelines in Databricks (Jobs/Workflows, performance tuning, production delivery)
- Hands-on experience with distributed systems and scalable data processing (Spark essential)
- Experience working with large-scale/high-frequency datasets (IoT/telemetry, smart meter, weather, time-series)
- Clear communicator able to influence design decisions, align stakeholders and operate autonomously
Nice to have:
- Energy/utilities domain exposure
- Cloud ownership experience (AWS preferred, Azure also relevant)
- Experience defining microservices / modular components supporting data products
Senior Algorithm Engineer (Python/Spark-Distributed Processing) employer: Xcede
Contact Detail:
Xcede Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Senior Algorithm Engineer (Python/Spark-Distributed Processing)
✨Tip Number 1
Network like a pro! Reach out to your connections in the industry, attend meetups, and engage in online forums. We all know that sometimes it’s not just what you know, but who you know that can land you that dream job.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving Python and Spark. We recommend using platforms like GitHub to share your code and demonstrate your expertise in building distributed data pipelines.
✨Tip Number 3
Prepare for interviews by brushing up on your technical knowledge and problem-solving skills. We suggest practicing common algorithm questions and discussing your past experiences with distributed systems to impress your interviewers.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets noticed. Plus, we love seeing candidates who are proactive about their job search!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Python and Spark, especially in building distributed data pipelines. We want to see how your skills align with the role, so don’t be shy about showcasing relevant projects!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re passionate about algorithm engineering and how your background fits our needs. We love seeing enthusiasm and a clear understanding of the role.
Showcase Your Technical Skills: When filling out your application, be specific about your technical expertise. Mention any experience with Databricks, performance tuning, or working with large datasets. We’re looking for those hands-on skills that make you stand out!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it’s super easy – just follow the prompts!
How to prepare for a job interview at Xcede
✨Know Your Algorithms
Make sure you brush up on your knowledge of algorithms and data models, especially those related to pricing, forecasting, and optimisation. Be ready to discuss specific examples from your past work where you've designed or optimised these models.
✨Show Off Your Python Skills
Since strong Python proficiency is a must, prepare to demonstrate your coding skills. You might be asked to solve problems on the spot, so practice writing clean, efficient code that implements complex business logic.
✨Familiarise Yourself with Spark
As this role heavily involves Spark and PySpark, ensure you understand how to build and optimise distributed pipelines. Be prepared to discuss your experience with Databricks and any performance tuning you've done in the past.
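If it helps your prep, a few of the standard tuning levers (broadcast joins, controlling shuffle parallelism, caching reused results) look roughly like the sketch below. The DataFrames, keys and paths are hypothetical and purely illustrative.

```python
# Illustrative only: common Spark tuning levers worth being able to discuss.
# DataFrame names, keys and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning_example").getOrCreate()

facts = spark.read.parquet("s3://example-bucket/facts/")
small_dim = spark.read.parquet("s3://example-bucket/dim/")

# Broadcast a small dimension table to avoid a shuffle-heavy join
joined = facts.join(F.broadcast(small_dim), on="key", how="left")

# Repartition before a wide aggregation to control shuffle parallelism,
# and cache the result if several downstream steps reuse it
result = (
    joined
    .repartition(200, "key")
    .groupBy("key")
    .agg(F.sum("value").alias("total"))
)
result.cache()
```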
✨Communicate Clearly
This position requires clear communication and the ability to influence design decisions. Practice articulating your thought process and how you've collaborated with engineers and domain specialists to translate requirements into scalable solutions.