At a Glance
- Tasks: Design and optimize distributed training systems for cutting-edge AI models.
- Company: Join Luma AI, a leader in multimodal AI innovation.
- Benefits: Competitive salary, remote work options, and opportunities for professional growth.
- Other info: Dynamic team environment with significant career advancement potential.
- Why this job: Be at the forefront of AI technology and make a real impact on future innovations.
- Qualifications: Experience with PyTorch, CUDA, and distributed systems is essential.
Luma’s mission is to build multimodal AI to expand human imagination and capabilities. We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable, and useful systems, we expect the next step-function change to come from vision. We are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.
The Training Infrastructure team at Luma is responsible for building and maintaining the distributed systems that enable training of our large-scale multimodal models across thousands of GPUs. This team ensures our researchers can focus on innovation while having access to reliable, efficient, and scalable training infrastructure that pushes the boundaries of what’s possible in AI model development.
We are looking for engineers with significant experience solving hard problems in PyTorch, CUDA, and distributed systems to build and train cutting-edge foundation models on thousands of GPUs.
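For a concrete sense of the problem space, here is a minimal sketch of a multi-GPU PyTorch training entry point launched with torchrun. The model, data, and hyperparameters are illustrative placeholders, not anything from Luma's actual stack.

```python
# Minimal sketch of a multi-GPU training entry point, launched with torchrun.
# Model, data, and hyperparameters are placeholders for illustration only.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # stand-in model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):  # stand-in training loop with synthetic data
        batch = torch.randn(32, 1024, device=local_rank)
        loss = model(batch).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()  # gradients are all-reduced across ranks by DDP
        optimizer.step()
        if dist.get_rank() == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()


if __name__ == "__main__":
    # e.g. torchrun --nproc_per_node=8 train.py
    main()
```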
Responsibilities
- Design, implement, and optimize efficient distributed training systems that train models across thousands of GPUs
- Research and implement advanced parallelization techniques (FSDP, Tensor Parallel, Pipeline Parallel, Expert Parallel); a minimal FSDP sketch follows this list
- Build monitoring, visualization, and debugging tools for large-scale training runs
- Optimize training stability, convergence, and resource utilization across massive clusters
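To make the parallelization item above concrete, here is a minimal sketch of sharded data parallelism with PyTorch's FSDP wrapper. The model, wrap-policy threshold, and tensor sizes are illustrative assumptions; real systems combine FSDP with tensor, pipeline, and expert parallelism.

```python
# Minimal sketch of sharded data parallelism with PyTorch FSDP.
# The model and wrap policy are illustrative placeholders.
import functools
import os

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.wrap import size_based_auto_wrap_policy

dist.init_process_group(backend="nccl")  # launched via torchrun
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# Stand-in for a stack of transformer blocks.
model = torch.nn.Sequential(
    *[torch.nn.Linear(4096, 4096) for _ in range(8)]
).cuda(local_rank)

# Shard parameters, gradients, and optimizer state across ranks; any submodule
# above ~1M parameters becomes its own FSDP unit.
model = FSDP(
    model,
    auto_wrap_policy=functools.partial(
        size_based_auto_wrap_policy, min_num_params=1_000_000
    ),
    device_id=local_rank,
)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
batch = torch.randn(8, 4096, device=local_rank)  # synthetic batch
loss = model(batch).pow(2).mean()                # dummy loss for illustration
loss.backward()
optimizer.step()
```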
Experience
- Extensive experience with distributed PyTorch training and the parallelism strategies used to train foundation models
- Deep understanding of GPU clusters, networking, and storage systems
- Familiarity with communication libraries (NCCL, MPI) and distributed system optimization; a minimal NCCL collective sketch follows this section
Preferred:
- Strong Linux systems administration and scripting capabilities
- Experience managing training runs across >100 GPUs
- Experience with containerization, orchestration, and cloud infrastructure
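As a minimal illustration of the communication libraries mentioned above, the sketch below runs a single all-reduce over the NCCL backend, the collective primitive underlying gradient synchronization in the parallelism schemes above. Ranks and device assignment come from torchrun; the tensor values are arbitrary.

```python
# Minimal sketch of a collective operation over the NCCL backend.
import os

import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")  # torchrun provides rank/world size
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# Each rank contributes its own tensor; after all_reduce every rank holds the sum.
x = torch.ones(4, device=local_rank) * dist.get_rank()
dist.all_reduce(x, op=dist.ReduceOp.SUM)
print(f"rank {dist.get_rank()}: {x.tolist()}")

dist.destroy_process_group()
```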
Compensation
The base pay range for this role is $187,500 – $395,000 per year.
Employer: LUMA
Contact: LUMA Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Research Scientist / Engineer – Training Infrastructure role
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with Luma AI employees on LinkedIn. Building relationships can open doors that applications alone can't.
✨Tip Number 2
Show off your skills! Create a portfolio or GitHub repository showcasing your projects related to distributed systems and PyTorch. This gives us a tangible way to see what you can bring to the table.
✨Tip Number 3
Prepare for technical interviews by brushing up on your knowledge of GPU clusters and parallelization techniques. We want to see how you think through problems, so practice explaining your thought process clearly.
✨Tip Number 4
Apply directly through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you're genuinely interested in joining our team at Luma AI.
We think you need these skills to ace the Research Scientist / Engineer – Training Infrastructure role
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the role of Research Scientist/Engineer. Highlight your experience with distributed systems, PyTorch, and CUDA. We want to see how your skills align with our mission at Luma AI!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Share your passion for multimodal AI and how you can contribute to our Training Infrastructure team. Let us know why you're excited about the opportunity to work with cutting-edge technology.
Showcase Relevant Projects: If you've worked on projects involving large-scale training or GPU clusters, make sure to mention them! We love seeing real-world applications of your skills, so don’t hold back on the details.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it shows you’re keen on joining our team!
How to prepare for a job interview at LUMA
✨Know Your Tech Inside Out
Make sure you’re well-versed in PyTorch, CUDA, and distributed systems. Brush up on advanced parallelization techniques like FSDP and Tensor Parallel. Being able to discuss these topics confidently will show that you’re not just familiar with the tools, but that you can also innovate with them.
✨Showcase Your Problem-Solving Skills
Prepare to discuss specific challenges you've faced in previous roles, especially those related to large-scale training systems. Use the STAR method (Situation, Task, Action, Result) to structure your answers and highlight how you tackled complex problems effectively.
✨Demonstrate Your Understanding of Infrastructure
Familiarize yourself with GPU clusters, networking, and storage systems. Be ready to explain how you’ve optimized resource utilization in past projects. This will help you connect your experience to the responsibilities of the role at Luma AI.
✨Ask Insightful Questions
Prepare thoughtful questions about Luma's current projects and future goals. Inquire about their approach to building scalable training infrastructure and how they tackle challenges in multimodal AI. This shows your genuine interest in the company and the role.