At a Glance
- Tasks: Design and optimise algorithms for pricing and forecasting in a data-driven environment.
- Company: Join a leading SaaS platform in a complex, regulated industry.
- Benefits: Remote work, competitive salary, and opportunities for professional growth.
- Why this job: Make an impact by building scalable solutions with cutting-edge technology.
- Qualifications: Strong Python skills and experience with distributed data processing.
- Other info: Collaborative team culture with mentorship and technical leadership opportunities.
The predicted salary is between £48,000 and £72,000 per year.
Location: UK (O/IR35) / Belgium / Netherlands / Germany (B2B)
Working model: Remote
Start: ASAP
We’re hiring a Senior Algorithm Engineer to join a data‑intensive SaaS platform operating in a complex, regulated industry. This is a hands‑on senior IC role focused on building and optimising distributed data pipelines that power pricing, forecasting and billing calculations at scale. This is not an ML / Data Science / GenAI role.
What You’ll Be Doing
- Design, build and deploy algorithms/data models supporting pricing, forecasting and optimisation use cases in production
- Develop and optimise distributed Spark / PySpark batch pipelines for large‑scale data processing (see the sketch after this list)
- Write production‑grade Python workflows implementing complex, explainable business logic
- Work with Databricks for job execution, orchestration and optimisation
- Improve pipeline performance, reliability and cost efficiency across high‑volume workloads
- Collaborate with engineers and domain specialists to translate requirements into scalable solutions
- Provide senior‑level ownership through technical leadership, mentoring and best‑practice guidance
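For a flavour of the work, here is a minimal sketch of the kind of Spark/PySpark batch job the responsibilities above describe. It is illustrative only: the paths, column names and tariff logic are assumptions, not the platform's actual data model.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal sketch of a batch pricing pipeline; all paths and columns are hypothetical.
spark = SparkSession.builder.appName("pricing-batch").getOrCreate()

readings = spark.read.parquet("/mnt/data/meter_readings")  # hypothetical input path
tariffs = spark.read.parquet("/mnt/data/tariffs")          # hypothetical input path

# Join consumption to tariffs and compute a per-reading charge.
priced = (
    readings
    .join(tariffs, on="tariff_id", how="left")
    .withColumn("charge", F.col("consumption_kwh") * F.col("unit_rate"))
)

# Aggregate to a daily bill per account.
daily_bill = (
    priced
    .groupBy("account_id", F.to_date("reading_ts").alias("billing_date"))
    .agg(F.sum("charge").alias("total_charge"))
)

daily_bill.write.mode("overwrite").partitionBy("billing_date").parquet("/mnt/data/daily_bills")
```

In practice a pipeline like this would be scheduled as a Databricks job and tuned for partitioning, shuffle size and cost.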
Key experience required
- Proven experience delivering production algorithms/data models (forecasting, pricing, optimisation or similar)
- Strong Python proficiency and modern data stack exposure (SQL, Pandas/NumPy + PySpark; Dask/Polars/DuckDB a bonus)
- Experience building, scheduling and optimising Spark/PySpark pipelines in Databricks (Jobs/workflows, performance tuning, production delivery)
- Hands‑on experience with distributed systems and scalable data processing (Spark essential)
- Experience working with large‑scale/high‑frequency datasets (IoT/telemetry, smart meter, weather, time‑series)
- Clear communicator able to influence design decisions, align stakeholders and operate autonomously
Nice to have
- Energy/utilities domain exposure
- Cloud ownership experience (AWS preferred, Azure also relevant)
- Experience defining microservices / modular components supporting data products
Senior Algorithm Engineer (Python/Spark–Distributed Processing) - Xcede in Chew Magna. Employer: Jobster
Contact Detail:
Jobster Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Algorithm Engineer (Python/Spark–Distributed Processing) role with Xcede in Chew Magna
✨Tip Number 1
Network like a pro! Reach out to folks in your industry on LinkedIn or at meetups. We all know that sometimes it’s not just what you know, but who you know that can help you land that Senior Algorithm Engineer role.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your Python and Spark projects. We want to see how you’ve tackled distributed data processing challenges. This is your chance to shine and demonstrate your hands-on experience!
✨Tip Number 3
Prepare for those interviews! Brush up on your technical knowledge and be ready to discuss your past projects in detail. We recommend practising common algorithm questions and explaining your thought process clearly.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. We’re excited to see how you can contribute to our data-intensive SaaS platform!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Python and Spark, especially in building and optimising distributed data pipelines. We want to see how your skills align with the role, so don’t be shy about showcasing relevant projects!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re passionate about this role and how your background fits the bill. We love seeing enthusiasm and a clear understanding of what the company does.
Showcase Your Technical Skills: When detailing your experience, focus on specific technologies and methodologies you've used, like Databricks or PySpark. We’re looking for concrete examples that demonstrate your ability to handle large-scale data processing.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it makes the process smoother for everyone involved!
How to prepare for a job interview at Jobster
✨Know Your Algorithms
Make sure you brush up on your knowledge of algorithms and data models, especially those related to pricing, forecasting, and optimisation. Be ready to discuss specific examples from your past work where you've designed or optimised these models.
✨Show Off Your Python Skills
Since strong Python proficiency is key for this role, prepare to demonstrate your coding skills. You might be asked to solve a problem on the spot, so practice writing clean, efficient code that implements complex business logic.
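As a warm-up, you could practise on something like the snippet below: a small, self-contained example of explainable business logic. The tier boundaries and rates are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    """One pricing tier: consumption up to `limit_kwh` is charged at `rate`."""
    limit_kwh: float
    rate: float

# Illustrative tiers only; real tariffs would come from configuration or data.
TIERS = [Tier(limit_kwh=100.0, rate=0.30), Tier(limit_kwh=float("inf"), rate=0.25)]

def tiered_charge(consumption_kwh: float, tiers: list[Tier] = TIERS) -> float:
    """Charge consumption across tiers; each band is priced at its tier's rate."""
    remaining = consumption_kwh
    previous_limit = 0.0
    total = 0.0
    for tier in tiers:
        band = min(remaining, tier.limit_kwh - previous_limit)
        if band <= 0:
            break
        total += band * tier.rate
        remaining -= band
        previous_limit = tier.limit_kwh
    return round(total, 2)

# 150 kWh -> 100 kWh at 0.30 plus 50 kWh at 0.25 = 42.50
print(tiered_charge(150.0))
```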
✨Familiarise Yourself with Spark
Get comfortable with Spark and PySpark, particularly in the context of building and optimising distributed data pipelines. Be prepared to discuss your experience with Databricks and how you've improved pipeline performance in previous roles.
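If it helps, here is a minimal, hypothetical example of the kind of optimisation worth being able to explain: broadcasting a small dimension table to avoid shuffling the large side of a join, and setting the shuffle partition count before a wide aggregation. The table and column names are assumptions for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Tune the shuffle partition count for the job; the right value depends on
# data volume and cluster size, so treat 200 as a placeholder.
spark.conf.set("spark.sql.shuffle.partitions", "200")

events = spark.read.parquet("/mnt/data/events")  # large fact table (hypothetical)
sites = spark.read.parquet("/mnt/data/sites")    # small dimension table (hypothetical)

# Broadcasting the small table avoids a full shuffle of the large one.
enriched = events.join(F.broadcast(sites), on="site_id", how="left")

result = enriched.groupBy("region").agg(F.avg("load_kw").alias("avg_load_kw"))
result.write.mode("overwrite").parquet("/mnt/data/avg_load_by_region")
```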
✨Communicate Clearly
As a senior-level candidate, you'll need to influence design decisions and align stakeholders. Practice articulating your thought process clearly and concisely, and think of examples where you've successfully collaborated with engineers and domain specialists.