At a Glance
- Tasks: Build and manage cutting-edge ML and cloud infrastructure for generative AI projects.
- Company: Join SpAItial, a leader in generative AI and 3D modelling.
- Benefits: Competitive salary, inclusive culture, and opportunities for professional growth.
- Other info: Diverse workplace committed to equal opportunity and inclusion.
- Why this job: Be part of a team redefining industries with innovative 3D technology.
- Qualifications: 3+ years in cloud engineering, strong skills in GPU compute and automation.
The predicted salary is between £60,000 and £80,000 per year.
SpAItial is pioneering the next generation of World Models, pushing the boundaries of generative AI, computer vision, and simulation. We are moving beyond 2D pixels to build models that natively understand the physics and geometry of our world. Our mission is to redefine how industries, from robotics and AR/VR to gaming and cinema, generate and interact with physically-grounded 3D environments.
We’re looking for bold, innovative individuals driven by a passion for tackling hard problems in generative 3D AI. You should thrive in an environment where creativity meets technical challenge, take pride in craft, and collaborate closely with a small team building frontier systems.
We are seeking a Machine Learning & Cloud Infra Engineer to build and own the infrastructure that powers our World Model research and productization. You will design, implement, and operate scalable training and data systems for large diffusion-based generative models, spanning GPU clusters, storage, orchestration, and reliable model serving. This role is hands-on and systems-focused, enabling researchers and engineers to train, evaluate, and deploy world-scale models efficiently and safely.
Responsibilities
- Own and evolve the ML + cloud infrastructure that enables training and evaluation of massive foundation models.
- Design and operate GPU clusters: Provision, scale, and maintain multi-node, multi-GPU training environments (on cloud and/or on-prem), including scheduling, quotas, and capacity planning.
- Distributed training enablement: Support high-throughput training stacks (e.g., PyTorch DDP/FSDP, NCCL) and ensure performance, stability, and reproducibility across large runs.
- Storage and data throughput: Build and optimize storage systems and networking for petabyte-scale datasets and high-bandwidth training (object storage, NVMe, shared filesystems, caching, data locality).
- Containerization and orchestration: Package and deploy workloads with Docker and Kubernetes (or comparable systems); maintain infrastructure-as-code (Terraform) and reliable release processes.
- Observability and reliability: Implement monitoring, logging, and alerting for cluster health, job performance, and cost; define SLOs and on-call/incident response practices.
- Security and access: Manage secrets, IAM, and secure network boundaries for research and production systems.
- Collaboration: Partner closely with ML researchers and engineers to unblock training, iterate on tooling, and improve developer experience.
- Production pathways: Support model evaluation and serving infrastructure where needed, and ensure smooth transitions from research to deployable systems.
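To give a flavour of the capacity-planning and throughput work described above, here is a minimal back-of-envelope sketch. All figures (node counts, GPUs per node, per-GPU throughput, sample counts) are illustrative assumptions, not SpAItial's actual numbers:

```python
# Back-of-envelope capacity planning for a multi-node GPU training run.
# All figures (nodes, GPUs, throughput, sample counts) are illustrative.

def estimate_run(nodes: int, gpus_per_node: int,
                 samples_per_gpu_per_sec: float,
                 total_samples: int,
                 scaling_efficiency: float = 0.9) -> dict:
    """Estimate wall-clock hours and GPU-hours for a training run.

    scaling_efficiency models communication overhead (e.g. NCCL
    all-reduce costs) that keeps real throughput below linear scaling.
    """
    total_gpus = nodes * gpus_per_node
    effective_throughput = total_gpus * samples_per_gpu_per_sec * scaling_efficiency
    wall_clock_hours = total_samples / effective_throughput / 3600
    return {
        "total_gpus": total_gpus,
        "wall_clock_hours": round(wall_clock_hours, 1),
        "gpu_hours": round(wall_clock_hours * total_gpus, 1),
    }

# Example: 8 nodes x 8 GPUs, 40 samples/GPU/s, 1 billion samples.
plan = estimate_run(nodes=8, gpus_per_node=8,
                    samples_per_gpu_per_sec=40.0,
                    total_samples=1_000_000_000)
print(plan)  # → {'total_gpus': 64, 'wall_clock_hours': 120.6, 'gpu_hours': 7716.0}
```

Estimates like this feed directly into the scheduling, quota, and cost-management decisions the role covers; in practice the scaling-efficiency term would be measured from profiling runs rather than assumed.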
Key Qualifications
- 3+ years of professional experience in infrastructure, platform, or cloud engineering (ML infrastructure experience strongly preferred).
- Hands-on experience with GPU compute and performance debugging (CUDA/NCCL concepts, GPU utilization, networking bottlenecks, profiling).
- Strong experience operating cloud environments (AWS, GCP, or Azure), including networking, IAM, and cost management.
- Proficiency with containers and orchestration (Docker, Kubernetes) and infrastructure-as-code (Terraform).
- Strong scripting and automation skills (Python plus Bash/PowerShell).
- Familiarity with distributed training and modern ML stacks (PyTorch; DDP/FSDP or comparable).
- Experience with monitoring and observability tooling (Prometheus/Grafana, OpenTelemetry, ELK, or similar).
- Experience building CI/CD for infra and ML workflows (e.g., CircleCI, GitHub Actions).
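As a sketch of the scripting-plus-observability glue these qualifications point at, the snippet below flags underutilised GPU nodes from sampled utilisation metrics. The node names, metric values, and threshold are hypothetical; a real setup would pull the samples from a monitoring system such as Prometheus rather than a hard-coded dict:

```python
# Flag GPU nodes whose mean utilisation falls below an alert threshold.
# Metric samples are hard-coded for illustration; in practice they would
# come from a monitoring system such as Prometheus.

from statistics import mean

def underutilised_nodes(samples: dict[str, list[float]],
                        threshold: float = 50.0) -> list[str]:
    """Return node names whose mean GPU utilisation (%) is below threshold."""
    return sorted(
        node for node, utils in samples.items()
        if mean(utils) < threshold
    )

# Hypothetical utilisation samples (% busy) from three nodes.
samples = {
    "gpu-node-01": [92.0, 88.5, 95.0],
    "gpu-node-02": [12.0, 8.0, 20.0],   # likely idle or a stuck job
    "gpu-node-03": [60.0, 75.0, 55.0],
}
print(underutilised_nodes(samples))  # → ['gpu-node-02']
```

A check like this would typically run as an alerting rule or a scheduled job, feeding the cost and cluster-health monitoring mentioned in the responsibilities.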
At SpAItial, we are committed to creating a diverse and inclusive workplace. We welcome applications from people of all backgrounds, experiences, and perspectives. We are an equal opportunity employer and ensure all candidates are treated fairly throughout the recruitment process.
Machine Learning & Cloud Infra Engineer in London. Employer: SpAItial
Contact Details:
SpAItial Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Machine Learning & Cloud Infra Engineer role in London
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with people on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those related to machine learning and cloud infrastructure. This gives potential employers a taste of what you can do and sets you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on technical concepts and problem-solving skills. Practice coding challenges and be ready to discuss your past experiences in detail. Confidence is key!
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen. Plus, we love seeing candidates who are genuinely interested in joining our mission at SpAItial.
We think you need these skills to ace the Machine Learning & Cloud Infra Engineer role in London
Some tips for your application 🫡
Tailor Your CV: Make sure your CV speaks directly to the role of Machine Learning & Cloud Infra Engineer. Highlight your experience with GPU clusters, cloud environments, and any relevant projects that showcase your skills in generative AI and infrastructure.
Craft a Compelling Cover Letter: Use your cover letter to tell us why you're passionate about generative 3D AI and how your background aligns with our mission at SpAItial. Share specific examples of challenges you've tackled and how you thrive in collaborative environments.
Showcase Your Technical Skills: Don’t just list your skills; demonstrate them! Include links to projects or GitHub repositories where we can see your work with Docker, Kubernetes, or any ML stacks you've used. This gives us a real sense of your capabilities.
Apply Through Our Website: We encourage you to apply through our website for a smoother application process. It helps us keep track of your application and ensures you don’t miss out on any important updates from us!
How to prepare for a job interview at SpAItial
✨Know Your Tech Inside Out
Make sure you’re well-versed in the technologies mentioned in the job description, especially around GPU compute and cloud environments. Brush up on your knowledge of CUDA, NCCL, and the specific cloud platforms like AWS or GCP. Being able to discuss these topics confidently will show that you’re not just familiar but truly engaged with the role.
✨Showcase Your Problem-Solving Skills
Prepare to discuss specific challenges you've faced in previous roles, particularly those related to infrastructure and machine learning. Think about how you approached these problems, what solutions you implemented, and the outcomes. This will demonstrate your hands-on experience and ability to tackle complex issues.
✨Collaborate and Communicate
Since this role involves working closely with ML researchers and engineers, be ready to talk about your collaboration experiences. Share examples of how you’ve worked in teams to improve processes or unblock projects. Highlighting your communication skills will show that you can thrive in a collaborative environment.
✨Prepare Questions That Matter
Have thoughtful questions ready for your interviewers that reflect your interest in SpAItial’s mission and the role itself. Ask about their current projects, the team dynamics, or how they envision the future of generative AI. This not only shows your enthusiasm but also helps you gauge if the company is the right fit for you.