At a Glance
- Tasks: Design and optimise high-performance data pipelines for groundbreaking ML research.
- Company: Join a mission-driven AI research organisation focused on animal communication.
- Benefits: Competitive salary up to £160k, fully remote work, and flexible hours.
- Why this job: Build infrastructure that directly enables new scientific discoveries in ML.
- Qualifications: 5+ years in backend/infrastructure engineering with strong Python skills.
- Other info: Collaborate with researchers in a dynamic, high-performance environment.
Do you want to build infrastructure that enables entirely new scientific discovery? Have you scaled data systems for ML workloads where performance actually matters? Are you ready to own foundational infrastructure at global, research-grade scale?
We’re working with a mission-driven AI research organisation applying advanced multimodal machine learning (audio, spatial, sensor, text, etc.) to understand animal communication. Operating at the intersection of ML research, large-scale data infrastructure, and real-world biological data, this team functions like a high-performance research lab with production-grade engineering standards. They are entering a growth phase and are scaling the core systems that power distributed AI research, large multimodal datasets, and public-facing data platforms. This is not consumer tech or ad optimisation - the infrastructure you build directly enables new scientific discovery.
The role is focused on designing, scaling, and hardening the data and backend platforms that support distributed ML workloads and TB–PB scale datasets. You’ll work closely with researchers and engineers to turn experimental systems into reliable, high-performance, production infrastructure.
Key responsibilities
- Design and optimise high-performance data pipelines for large, heterogeneous datasets
- Scale public-facing data infrastructure supporting ML research
- Optimise distributed AI workloads for latency, throughput, reliability, and GPU utilisation
- Build observability tooling for data quality, pipeline health, and experiments
- Support GPU infrastructure for large-scale model training
- Translate research prototypes into robust, production systems
- Scope and supervise work for interns, PhDs, and post-docs
Role details
- Salary: up to £150k, with possible flexibility to £160k (UK)
- Working model: Fully remote (UK)
- Tech stack: Python, Kubernetes, Docker, Terraform, GCP, GPU clusters
- Visa: No sponsorship
- Seniority: 5+ years backend / infrastructure engineering (flexible for exceptional profiles)
Senior ML Infrastructure Engineer employer: Harnham
Contact Detail:
Harnham Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior ML Infrastructure Engineer role
✨Tip Number 1
Network like a pro! Reach out to folks in the industry on LinkedIn or at meetups. We can’t stress enough how personal connections can open doors that applications alone can’t.
✨Tip Number 2
Show off your skills! Create a portfolio or GitHub repo showcasing your projects, especially those related to ML infrastructure. We love seeing real-world applications of your expertise!
✨Tip Number 3
Prepare for interviews by brushing up on technical questions and system design scenarios. We recommend doing mock interviews with friends or using platforms that simulate the experience.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we’re always looking for passionate individuals ready to make an impact.
Some tips for your application 🫡
Show Your Passion for Science: When writing your application, let your enthusiasm for scientific discovery shine through. We want to see how your experience in ML infrastructure can contribute to groundbreaking research, so don’t hold back on sharing your passion!
Tailor Your Experience: Make sure to highlight your relevant experience with data systems and ML workloads. We’re looking for someone who has scaled infrastructure at a global level, so be specific about your achievements and the impact they had.
Be Clear and Concise: Keep your application straightforward and to the point. We appreciate clarity, so avoid jargon unless it’s necessary. Use bullet points where possible to make your skills and experiences stand out.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for this exciting opportunity. Don’t miss out!
How to prepare for a job interview at Harnham
✨Know Your Tech Stack
Make sure you’re well-versed in the technologies mentioned in the job description, like Python, Kubernetes, and Docker. Brush up on your experience with GCP and GPU clusters, as these will likely come up during technical discussions.
✨Showcase Your Problem-Solving Skills
Prepare to discuss specific challenges you've faced in scaling data systems for ML workloads. Use examples that highlight your ability to optimise performance and reliability, as this role is all about turning experimental systems into robust infrastructure.
✨Understand the Research Context
Familiarise yourself with the intersection of ML research and biological data. Being able to speak knowledgeably about how your work can contribute to scientific discovery will impress the interviewers and show your alignment with their mission.
✨Ask Insightful Questions
Prepare thoughtful questions about the team’s current projects and future goals. This not only shows your interest but also gives you a chance to assess if the company culture and objectives align with your career aspirations.