Research Scientist / Engineer – Training Infrastructure in London
London · Full-Time · £3,000-£4,500 / month (est.) · No home office possible

At a Glance

  • Tasks: Build and optimise distributed systems for training large-scale multimodal AI models.
  • Company: Join Luma, a pioneering company in multimodal AI innovation.
  • Benefits: Competitive salary, flexible work options, and opportunities for professional growth.
  • Why this job: Be at the forefront of AI technology and make a real impact on future innovations.
  • Qualifications: Experience with PyTorch, CUDA, and distributed systems is essential.
  • Other info: Dynamic team environment with exciting challenges and career advancement opportunities.

The predicted salary is between £3,000 and £4,500 per month.

Luma’s mission is to build multimodal AI to expand human imagination and capabilities. We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable, and useful systems, we believe the next step-function change will come from vision. So, we are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.

About the Role

The Training Infrastructure team at Luma is responsible for building and maintaining the distributed systems that enable training of our large-scale multimodal models across thousands of GPUs. This team ensures our researchers can focus on innovation while having access to reliable, efficient, and scalable training infrastructure that pushes the boundaries of what’s possible in AI model development. We are looking for engineers with significant experience solving hard problems in PyTorch, CUDA, and distributed systems. You will work alongside the rest of the research team to build and train cutting-edge foundation models on thousands of GPUs, on infrastructure built to scale from the ground up.

Responsibilities

  • Design, implement, and optimize efficient distributed training systems for models trained across thousands of GPUs
  • Research and implement advanced parallelization techniques (FSDP, Tensor Parallel, Pipeline Parallel, Expert Parallel)
  • Build monitoring, visualization, and debugging tools for large-scale training runs
  • Optimize training stability, convergence, and resource utilization across massive clusters
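To make the parallelization techniques above concrete, here is a toy, dependency-free sketch of the idea behind FSDP (and not a description of Luma's actual stack): parameters live as per-rank shards, are all-gathered for compute, and gradients are reduce-scattered back to the owning shards. All names and the two-rank setup are hypothetical.

```python
# Toy single-process simulation of FSDP-style sharding (illustrative only).

def shard(values, world_size):
    """Split a flat parameter list into one contiguous shard per rank."""
    per = (len(values) + world_size - 1) // world_size
    return [values[r * per:(r + 1) * per] for r in range(world_size)]

def all_gather(shards):
    """Each rank reconstructs the full flat parameter list from every shard."""
    full = [v for s in shards for v in s]
    return [list(full) for _ in shards]

def reduce_scatter(grads_per_rank, world_size):
    """Sum per-rank gradients element-wise, then hand each rank only its shard."""
    summed = [sum(g[i] for g in grads_per_rank)
              for i in range(len(grads_per_rank[0]))]
    return shard(summed, world_size)

params = [1.0, 2.0, 3.0, 4.0]
shards = shard(params, world_size=2)     # rank 0: [1.0, 2.0], rank 1: [3.0, 4.0]
gathered = all_gather(shards)            # every rank sees the full parameter list
grads = [[0.1, 0.1, 0.1, 0.1],           # rank 0's local gradients
         [0.2, 0.2, 0.2, 0.2]]           # rank 1's local gradients
grad_shards = reduce_scatter(grads, 2)   # each rank keeps only its shard of the sum
```

In real FSDP the same dance happens per layer with `torch.nn` modules and NCCL collectives, which is what keeps per-GPU memory proportional to the shard rather than the full model.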

Experience

  • Extensive experience with distributed PyTorch training and parallelism strategies in foundation model training
  • Deep understanding of GPU clusters, networking, and storage systems
  • Familiarity with communication libraries (NCCL, MPI) and distributed system optimization
  • (Preferred) Strong Linux systems administration and scripting capabilities
  • (Preferred) Experience managing training runs across >100 GPUs
  • (Preferred) Experience with containerization, orchestration, and cloud infrastructure
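As a hedged illustration of what the communication libraries above do under the hood, here is a toy simulation of a ring all-reduce, the collective NCCL typically uses for gradient synchronisation: each "rank" holds one slice of the data, and after a reduce-scatter phase plus an all-gather phase, every rank holds the full sum. This is a sketch of the algorithm, not NCCL's API.

```python
# Toy simulation of a ring all-reduce (illustrative only).

def ring_all_reduce(chunks):
    """chunks[r][c] is rank r's local value of chunk c (world size == #chunks)."""
    n = len(chunks)
    data = [list(row) for row in chunks]
    # Phase 1: reduce-scatter. At step s, rank r receives a chunk from its
    # left neighbour and adds it into its own copy of that chunk.
    for s in range(n - 1):
        prev = [list(row) for row in data]
        for r in range(n):
            src = (r - 1) % n
            c = (src - s) % n
            data[r][c] = prev[r][c] + prev[src][c]
    # Phase 2: all-gather. Fully reduced chunks travel once more around the
    # ring, overwriting each rank's stale copies.
    for s in range(n - 1):
        prev = [list(row) for row in data]
        for r in range(n):
            src = (r - 1) % n
            c = (src + 1 - s) % n
            data[r][c] = prev[src][c]
    return data

result = ring_all_reduce([[1.0, 2.0], [3.0, 4.0]])
# every rank ends with the element-wise sum [4.0, 6.0]
```

The appeal of the ring layout is that each rank only ever talks to its neighbours, so total bandwidth per rank stays roughly constant as the cluster grows.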

Employer: lumalabs.ai

At Luma, we are committed to fostering a dynamic and innovative work environment where our employees can thrive. As a Research Scientist/Engineer in Training Infrastructure, you will be part of a collaborative team dedicated to pushing the boundaries of AI technology, with access to cutting-edge resources and opportunities for professional growth. Our culture prioritises creativity and teamwork, ensuring that every team member contributes to meaningful projects that expand human capabilities.

Contact Detail:

lumalabs.ai Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land Research Scientist / Engineer – Training Infrastructure in London

Tip Number 1

Network like a pro! Reach out to people in the industry, attend meetups, and connect with professionals on LinkedIn. We can’t stress enough how important it is to make those connections; you never know who might have the inside scoop on job openings.

Tip Number 2

Show off your skills! Create a portfolio or GitHub repository showcasing your projects, especially those related to distributed systems and AI. This gives potential employers a taste of what you can do and sets you apart from the crowd.

Tip Number 3

Prepare for interviews by brushing up on technical questions and problem-solving scenarios. We recommend practicing coding challenges and discussing your thought process out loud. This will help you shine during those crucial interview moments.

Tip Number 4

Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are genuinely interested in joining our mission at Luma.

We think you need these skills to ace Research Scientist / Engineer – Training Infrastructure in London

PyTorch
CUDA
Distributed Systems
Parallelization Techniques
FSDP
Tensor Parallel
Pipeline Parallel
Expert Parallel
Monitoring Tools
Visualization Tools
Debugging Tools
GPU Clusters
Networking
Storage Systems
NCCL
MPI
Linux Systems Administration
Scripting
Containerization
Orchestration
Cloud Infrastructure

Some tips for your application 🫡

Tailor Your CV: Make sure your CV is tailored to the role of Research Scientist/Engineer. Highlight your experience with distributed systems, PyTorch, and any relevant projects that showcase your skills in building scalable training infrastructure.

Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about multimodal AI and how your background aligns with our mission at Luma. Be specific about your experience with GPU clusters and parallelization techniques.

Showcase Your Problem-Solving Skills: In your application, don’t just list your skills—show us how you've used them to solve complex problems in the past. Share examples of challenges you've faced in distributed training and how you overcame them.

Apply Through Our Website: We encourage you to apply through our website for a smoother process. It helps us keep track of applications and ensures you get the best chance to showcase your talents directly to our team!

How to prepare for a job interview at lumalabs.ai

Know Your Tech Inside Out

Make sure you’re well-versed in PyTorch, CUDA, and distributed systems. Brush up on advanced parallelization techniques like FSDP and Tensor Parallel. Being able to discuss these topics confidently will show that you’re not just familiar with the tools, but you can also apply them effectively.
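If it helps your revision, here is a toy, dependency-free sketch of the idea behind Megatron-style column tensor parallelism (hypothetical names, not any particular framework's API): the weight matrix is split column-wise across two "ranks", each computes a partial output, and concatenating the partials reproduces the single-device result.

```python
# Toy sketch of column-wise tensor parallelism (illustrative only).

def matvec(x, w):
    """Row-vector x times matrix w (given as a list of rows)."""
    return [sum(x[i] * w[i][j] for i in range(len(x)))
            for j in range(len(w[0]))]

def split_columns(w, parts):
    """Give each of `parts` ranks a contiguous block of columns."""
    step = len(w[0]) // parts
    return [[row[p * step:(p + 1) * step] for row in w] for p in range(parts)]

x = [1.0, 2.0]
W = [[1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0]]
reference = matvec(x, W)                   # single-device result
shards = split_columns(W, 2)               # rank 0: cols 0-1, rank 1: cols 2-3
partials = [matvec(x, s) for s in shards]  # each rank computes independently
combined = partials[0] + partials[1]       # concatenation plays the all-gather role
assert combined == reference
```

Being able to walk through why the column split needs no communication during the matmul itself is exactly the kind of depth interviewers for this role tend to probe.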

Showcase Your Problem-Solving Skills

Prepare to discuss specific challenges you've faced in previous roles, especially those related to distributed training systems. Use the STAR method (Situation, Task, Action, Result) to structure your answers, highlighting how you tackled complex problems and what the outcomes were.

Demonstrate Your Understanding of Infrastructure

Familiarise yourself with GPU clusters, networking, and storage systems. Be ready to explain how you’ve optimised resource utilisation in past projects. This will demonstrate your ability to contribute to building scalable training infrastructure right from the get-go.

Ask Insightful Questions

Prepare thoughtful questions about the team’s current projects and challenges. Inquire about their approach to monitoring and debugging large-scale training runs. This shows your genuine interest in the role and helps you assess if the company aligns with your career goals.
