At a Glance
- Tasks: Build and optimise ML systems for training and serving cutting-edge generative models.
- Company: Join SpAItial, a leader in generative AI and 3D modelling.
- Benefits: Competitive salary, inclusive culture, and opportunities for professional growth.
- Other info: Collaborative environment with a focus on creativity and innovation.
- Why this job: Work on groundbreaking technology that shapes the future of industries like gaming and robotics.
- Qualifications: 3+ years in Python, experience with ML systems, and strong problem-solving skills.
The predicted salary is between £60,000 and £80,000 per year.
SpAItial is pioneering the next generation of World Models, pushing the boundaries of generative AI, computer vision, and simulation. We are moving beyond 2D pixels to build models that natively understand the physics and geometry of our world. Our mission is to redefine how industries, from robotics and AR/VR to gaming and cinema, generate and interact with physically grounded 3D environments.
We’re looking for bold, innovative individuals driven by a passion for tackling hard problems in generative 3D AI. You should thrive in an environment where creativity meets technical challenge, take pride in craft, and collaborate closely with a small team building frontier systems.
We are seeking a Machine Learning Systems & Infrastructure Engineer to build and own the systems that turn raw real-world data into trained world models and reliable production endpoints. You will design, implement, and operate scalable training stacks, data ingestion pipelines, experiment orchestration, and model serving for large diffusion-based generative models. The role is hands-on and code-heavy — you will work inside the same monorepo as the research team, mostly in Python, and should be as comfortable refactoring a trainer class or a dataset loader as you are writing Terraform.
Responsibilities
- Core ML systems: Own and evolve the ML systems that enable training, evaluation, and serving of large foundation models, including the trainer, dataset loaders, checkpointing, and experiment orchestration code.
- Distributed training enablement: Improve high-throughput training stacks (e.g., PyTorch DDP/FSDP, NCCL) for performance, stability, and reproducibility, including preemption-safe and sharded checkpointing.
- Data systems and pipelines: Build end-to-end Python pipelines that turn third-party capture sources into clean, versioned training datasets — including scraping (e.g., Playwright) and preprocessing — and optimize the underlying storage at petabyte scale (object storage, fuse mounts, caching layers, shared filesystems, and relational / analytical / embedded metadata stores).
- ML workflow orchestration and serving: Operate the systems researchers use to launch experiments, data jobs, and production endpoints — workflow engines (e.g., Kubeflow Pipelines, Airflow), GPU schedulers (e.g., Volcano, Slurm), experiment trackers (e.g., MLflow, Weights & Biases), and managed-inference platforms (e.g., Modal, Triton) — and maintain a launcher SDK for one-command runs.
- Containerization and packaging: Ship workloads with Docker and Kubernetes; maintain IaC (Terraform) for the surfaces you own and CI/CD pipelines, including self-hosted GPU runners.
- Observability and reliability: Build monitoring, logging, and alerting for job performance, data-pipeline health, and cost (e.g., Prometheus/Grafana, OpenTelemetry); define SLOs and incident response for the systems you own.
- Security and access: Manage secrets, IAM, and network boundaries (e.g., Tailscale, cloud VPC) for the systems you own.
- Collaboration: Partner with ML researchers, engineers, and the platform team to unblock training and data work and improve developer experience.
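The "preemption-safe checkpointing" responsibility above can be illustrated with a minimal, framework-agnostic Python sketch. This is a toy example, not SpAItial's actual API; the names `save_checkpoint` and `latest_checkpoint` are hypothetical. The core pattern is: write the state to a temporary file, fsync it, then atomically rename it into place, so a job killed mid-write can always resume from the last complete checkpoint.

```python
import json
import os
import tempfile


def save_checkpoint(ckpt_dir, step, state):
    """Atomically persist training state so preemption never leaves a torn file.

    Hypothetical sketch: real trainers would serialize model/optimizer shards,
    but the write-temp-then-rename pattern is the same.
    """
    os.makedirs(ckpt_dir, exist_ok=True)
    payload = json.dumps({"step": step, "state": state})
    fd, tmp = tempfile.mkstemp(dir=ckpt_dir, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())  # force the bytes to disk before publishing
    # Atomic rename: readers see either the old checkpoint or the new one,
    # never a partially written file. Zero-padded step keeps lexical order
    # equal to numeric order.
    os.replace(tmp, os.path.join(ckpt_dir, f"ckpt_{step:08d}.json"))


def latest_checkpoint(ckpt_dir):
    """Resume point: the highest-step complete checkpoint, or None on a fresh start."""
    ckpts = sorted(f for f in os.listdir(ckpt_dir) if f.startswith("ckpt_"))
    if not ckpts:
        return None
    with open(os.path.join(ckpt_dir, ckpts[-1])) as f:
        return json.load(f)
```

In a real multi-node stack the same idea extends to sharded saves (e.g., one file per rank plus a metadata manifest published last), which is what libraries such as `torch.distributed.checkpoint` provide.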
Key Qualifications
- 3+ years writing production-quality Python in a large, multi-author codebase, with strong SWE fundamentals (ML systems experience strongly preferred).
- Hands-on with modern ML training stacks (PyTorch; DDP/FSDP or comparable); have personally debugged distributed jobs across many GPUs and nodes.
- Have shipped non-trivial end-to-end data pipelines at scale — ingestion, transformation, validation, versioning, republishing — ideally including real-world sources with rate limits, auth, or undocumented APIs.
- Hands-on experience with GPU compute and performance debugging (CUDA/NCCL, GPU utilisation, networking bottlenecks, profiling).
- Working knowledge of cloud environments (AWS, GCP, or Azure), including object storage, IAM, and cost awareness.
- Proficient with containers (Docker, Kubernetes) and comfortable reading and writing IaC (Terraform) for the surfaces you ship.
- Strong working knowledge of how to store and query large datasets at scale: SQL fundamentals; relational (e.g., Postgres), analytical (e.g., BigQuery, Snowflake), and embedded (e.g., SQLite) stores; and object storage with caching layers. Familiarity with ML workflow orchestration and experiment tracking (e.g., Kubeflow Pipelines, MLflow).
- Experience with monitoring and observability tooling (e.g., Prometheus/Grafana, OpenTelemetry) and CI/CD for infra and ML workflows (e.g., GitHub Actions).
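As a rough illustration of the "validation and versioning" expectation above (a toy sketch under assumed names; `validate_record` and `version_batch` are hypothetical, not tied to any specific stack): content-hashing the cleaned records gives each dataset batch a deterministic version, so a republished dataset is reproducible and accidental changes are detectable.

```python
import hashlib
import json


def validate_record(rec):
    """Minimal schema check: required keys present and non-empty.

    Hypothetical fields for illustration; a real pipeline would validate
    against a declared schema.
    """
    return isinstance(rec, dict) and bool(rec.get("uri")) and bool(rec.get("captured_at"))


def version_batch(records):
    """Drop invalid rows, then derive a deterministic content hash as the batch version.

    Sorting plus sort_keys makes the serialization canonical, so the same
    clean content always yields the same version string.
    """
    clean = [r for r in records if validate_record(r)]
    canonical = json.dumps(sorted(clean, key=lambda r: r["uri"]), sort_keys=True)
    return clean, hashlib.sha256(canonical.encode()).hexdigest()[:12]
```

The same content-addressing idea underlies dataset-versioning tools; the version doubles as a cache key for downstream preprocessing jobs.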
At SpAItial, we are committed to creating a diverse and inclusive workplace. We welcome applications from people of all backgrounds, experiences, and perspectives. We are an equal opportunity employer and ensure all candidates are treated fairly throughout the recruitment process.
Machine Learning Systems & Infrastructure Engineer in London
Employer: SpAItial
Contact Details:
SpAItial Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Machine Learning Systems & Infrastructure Engineer role in London
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with people on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those related to machine learning systems and infrastructure. This gives potential employers a taste of what you can do and sets you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on technical concepts and coding challenges relevant to the role. Practice explaining your thought process clearly, as communication is key when collaborating with teams.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in joining our innovative team at SpAItial.
We think you need these skills to ace the Machine Learning Systems & Infrastructure Engineer role in London
Some tips for your application 🫡
Tailor Your Application: Make sure to customise your CV and cover letter to highlight your experience with Python and ML systems. We want to see how your skills align with the role, so don’t hold back on showcasing relevant projects!
Show Your Passion: Let us know why you’re excited about generative AI and 3D environments! A genuine passion for tackling hard problems can really make your application stand out.
Be Clear and Concise: When writing your application, keep it straightforward. Use clear language and avoid jargon unless it’s relevant. We appreciate a well-structured application that gets straight to the point.
Apply Through Our Website: Don’t forget to submit your application through our website! It’s the best way for us to receive your details and ensures you’re considered for the role. We can’t wait to hear from you!
How to prepare for a job interview at SpAItial
✨Know Your Tech Stack
Make sure you’re well-versed in the technologies mentioned in the job description, especially Python, PyTorch, and Docker. Brush up on your experience with distributed training and data pipelines, as these will likely come up during technical discussions.
✨Showcase Your Problem-Solving Skills
Prepare to discuss specific challenges you've faced in previous roles, particularly those related to ML systems and infrastructure. Be ready to explain how you approached these problems and what solutions you implemented, as this will demonstrate your hands-on experience.
✨Familiarise Yourself with Their Projects
Research SpAItial’s work in generative AI and 3D environments. Understanding their mission and recent projects will help you align your answers with their goals and show that you’re genuinely interested in contributing to their vision.
✨Ask Insightful Questions
Prepare thoughtful questions about their ML workflows, team dynamics, and future projects. This not only shows your enthusiasm but also helps you gauge if the company culture and role are a good fit for you.