At a Glance
- Tasks: Lead the development of distributed training support for cutting-edge ML models.
- Company: Join Annapurna Labs, a pioneering team within AWS, driving cloud innovation.
- Benefits: Enjoy flexible work-life balance, mentorship opportunities, and a culture of inclusion.
- Why this job: Be part of a team that tackles groundbreaking challenges in AI/ML and cloud technology.
- Qualifications: 5+ years in software development and machine learning; strong Python skills required.
- Other info: Diverse experiences are welcomed; apply even if you don't meet every qualification.
Sr. Software Engineer- AI/ML, AWS Neuron Distributed Training
Annapurna Labs designs silicon and software that accelerate innovation. Customers choose us to create cloud solutions that solve challenges that were unimaginable even a short time ago. Our custom chips, accelerators, and software stacks enable us to take on technical challenges that have never been seen before and deliver results that help our customers change the world.
AWS Neuron is the complete software stack for AWS Trainium (Trn1/Trn2) and Inferentia (Inf1/Inf2), our cloud-scale machine learning accelerators. This role is for a Senior Machine Learning Engineer on the Distributed Training team for AWS Neuron, responsible for development, enablement, and performance tuning of a wide variety of ML model families, including massive-scale large language models (LLMs) such as GPT and Llama, as well as Stable Diffusion, Vision Transformers (ViT), and many more.
The ML Distributed Training team works side by side with chip architects, compiler engineers, and runtime engineers to create, build, and tune distributed training solutions on Trainium instances. Experience training these large models in Python is a must. FSDP (Fully Sharded Data Parallel), DeepSpeed, NeMo, and other distributed training libraries are central to this work, and extending them to Neuron-based systems is key.
Key job responsibilities
You will lead efforts to build distributed training support into PyTorch and JAX using XLA, the Neuron compiler, and runtime stacks. You will optimize models to achieve peak performance and maximize efficiency on AWS custom silicon, including the Trainium and Inferentia chips and the Trn1, Trn2, Inf1, and Inf2 servers built on them. Strong software development skills, the ability to dive deep, the ability to work effectively within cross-functional teams, and a solid foundation in machine learning are critical for success in this role.
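The XLA path mentioned above can be made concrete with a tiny, hypothetical JAX example: JAX traces a Python function and lowers it through XLA, the same compiler entry point that Neuron targets for Trainium. The function and shapes here are made up purely for illustration.

```python
import jax
import jax.numpy as jnp

@jax.jit  # trace the function and compile it through XLA
def step(w, x):
    # toy objective: sum of tanh activations
    return jnp.tanh(x @ w).sum()

# The gradient function is itself traced and XLA-compiled.
grad_step = jax.jit(jax.grad(step))

w = jnp.ones((4, 4))
x = jnp.ones((2, 4))
g = grad_step(w, x)  # gradient with respect to w, same shape as w
```

On a Neuron system, the same program would be compiled by the Neuron compiler behind this XLA interface rather than by a CPU/GPU backend.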
About the team
Annapurna Labs was a startup company acquired by AWS in 2015, and is now fully integrated. If AWS is an infrastructure company, then think of Annapurna Labs as the infrastructure provider of AWS. Our org covers multiple disciplines including silicon engineering, hardware design and verification, software, and operations. AWS Nitro, ENA, EFA, Graviton and F1 EC2 instances, AWS Neuron, the Inferentia and Trainium ML accelerators, and scalable NVMe storage are some of the products we have delivered over the last few years.
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we’re building an environment that celebrates knowledge-sharing and mentorship. Our senior members enjoy one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects that help you develop your engineering expertise so you feel empowered to take on more complex tasks in the future.
Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.
About AWS
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Inclusive Team Culture
Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.
Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of life at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.
Mentorship & Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
BASIC QUALIFICATIONS
– Bachelor’s degree in computer science or equivalent
– 5+ years of non-internship professional software development experience
– 5+ years of programming with at least one software programming language experience
– 5+ years of leading design or architecture (design patterns, reliability and scaling) of new and existing systems experience
– 5+ years of full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations experience
– Experience as a mentor, tech lead or leading an engineering team
– Experience in machine learning, data mining, information retrieval, statistics or natural language processing
PREFERRED QUALIFICATIONS
– Master’s degree in computer science or equivalent
– Experience in computer architecture
– Previous software engineering experience with PyTorch/JAX/TensorFlow, distributed training libraries and frameworks, and end-to-end model training
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Los Angeles County applicants: Job duties for this position include: work safely and cooperatively with other employees, supervisors, and staff; adhere to standards of excellence despite stressful conditions; communicate effectively and respectfully with employees, supervisors, and staff to ensure exceptional customer service; and follow all federal, state, and local laws and Company policies. Criminal history may have a direct, adverse, and negative relationship with some of the material job duties of this position. These include the duties and responsibilities listed above, as well as the abilities to adhere to company policies, exercise sound judgment, effectively manage stress and work safely and respectfully with others, exhibit trustworthiness and professionalism, and safeguard business operations and the Company’s reputation. Pursuant to the Los Angeles County Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $151,300/year in our lowest geographic market up to $261,500/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit . This position will remain posted until filled. Applicants should apply via our internal or external career site.
Employer: Amazon
Contact Detail: Amazon Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Sr. Software Engineer- AI/ML, AWS Neuron Distributed Training
✨Tip Number 1
Familiarise yourself with AWS Neuron and its capabilities. Understanding how the software stack interacts with Trainium and Inferentia will give you an edge in discussions during interviews, showcasing your proactive approach to learning.
✨Tip Number 2
Engage with the community around distributed training libraries like PyTorch and JAX. Contributing to open-source projects or forums can help you build a network and demonstrate your expertise in these technologies.
✨Tip Number 3
Prepare to discuss your experience with large-scale ML models, particularly LLMs. Be ready to share specific examples of how you've optimised performance and efficiency in previous projects, as this is crucial for the role.
✨Tip Number 4
Highlight your collaborative skills. Since the role involves working closely with cross-functional teams, be prepared to discuss how you've successfully collaborated with engineers from different disciplines in past projects.
We think you need these skills to ace Sr. Software Engineer- AI/ML, AWS Neuron Distributed Training
Some tips for your application 🫡
Tailor Your CV: Make sure to customise your CV to highlight relevant experience in software development, machine learning, and distributed training. Emphasise your proficiency with Python and any specific libraries mentioned in the job description, such as PyTorch or JAX.
Craft a Strong Cover Letter: Write a compelling cover letter that showcases your passion for AI/ML and your understanding of AWS Neuron. Mention specific projects or experiences that align with the responsibilities of the role, particularly those involving large-scale models and distributed training.
Highlight Leadership Experience: Since the role requires mentoring and leading design efforts, be sure to include examples of your leadership experience. Discuss any previous roles where you led a team or mentored junior engineers, focusing on the impact you made.
Showcase Problem-Solving Skills: In your application, provide examples of how you've tackled complex technical challenges in the past. This could include optimising models for performance or working collaboratively with cross-functional teams to deliver solutions.
How to prepare for a job interview at Amazon
✨Showcase Your Technical Skills
Be prepared to discuss your experience with machine learning frameworks like PyTorch and JAX. Highlight specific projects where you've implemented distributed training solutions, especially using libraries like FSDP and DeepSpeed.
✨Understand the Role of AWS Neuron
Familiarise yourself with AWS Neuron and its role in optimising machine learning models on Trainium and Inferentia. Being able to articulate how these technologies work together will demonstrate your genuine interest in the position.
✨Emphasise Collaboration Experience
Since the role involves working closely with chip architects and compiler engineers, share examples of past experiences where you successfully collaborated within cross-functional teams. This will show that you can thrive in a team-oriented environment.
✨Prepare for Problem-Solving Questions
Expect technical questions that assess your problem-solving abilities. Practice explaining your thought process when tackling complex challenges, particularly those related to model optimisation and performance tuning.