At a Glance
- Tasks: Design and implement scalable AWS infrastructure for cutting-edge AI workloads.
- Company: Join a leading tech firm focused on Generative AI innovation.
- Benefits: Competitive salary, flexible work options, and opportunities for professional growth.
- Why this job: Be at the forefront of AI technology and make a significant impact.
- Qualifications: 7+ years in cloud architecture with strong AWS and AI/ML expertise.
- Other info: Collaborative environment with exciting projects and career advancement potential.
The predicted salary is between £80,000 and £100,000 per year.
We are seeking an experienced AI Infrastructure Architect with deep expertise in designing and operating scalable, secure, and high‑performance cloud environments for Generative AI and LLM workloads. This role is ideal for someone who combines strong AWS architectural skills with hands‑on experience in GPU compute, MLOps/LLMOps, and enterprise‑grade AI platform design. You should bring extensive experience building cloud‑native AI infrastructure, optimizing large‑scale model training and inference environments, designing complex AI systems, and creating detailed technical specifications, along with a track record of collaborating closely with AI/ML and multidisciplinary teams to enable advanced GenAI capabilities and ensure seamless implementation.
Responsibilities
- Design and implement scalable AWS infrastructure to support Generative AI and LLM workloads, including training, fine‑tuning, and inference.
- Architect secure, high‑performance environments using AWS core services such as Amazon SageMaker, Amazon Bedrock, Amazon EKS, AWS Lambda, and related cloud‑native components.
- Design GPU‑based compute environments (e.g., EC2 P‑series, G‑series) optimized for distributed training, fine‑tuning, and low‑latency inference.
- Implement secure VPC architectures, private endpoints, IAM policies, encryption (KMS), and enterprise‑grade data governance controls.
- Build and govern MLOps/LLMOps pipelines using SageMaker Pipelines, CodePipeline, and CI/CD best practices.
- Architect RAG infrastructure, including vector databases (OpenSearch, Aurora PostgreSQL with pgvector) and scalable storage solutions (S3).
- Establish monitoring and observability using CloudWatch, model monitoring tools, logging frameworks, and performance dashboards.
- Optimize infrastructure for latency, autoscaling, high availability, and cost efficiency, leveraging Spot Instances, Savings Plans, and right‑sizing strategies.
- Define disaster recovery (DR) and backup strategies across multi‑AZ and multi‑region AWS setups.
- Implement Infrastructure as Code (IaC) using Terraform or CloudFormation for consistent, repeatable provisioning of AI environments.
- Collaborate with AI/ML teams to support LLM fine‑tuning, prompt orchestration, inference endpoints, and model deployment workflows.
- Stay current with AWS GenAI advancements, evaluating new services, architectural patterns, and best practices for enterprise adoption.
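To give a flavour of the cost‑optimization work described above, here is a minimal sketch comparing the cost of a GPU training run on On‑Demand versus Spot capacity. All rates, durations, and instance counts are hypothetical placeholders for illustration only, not real AWS prices.

```python
# Illustrative only: a toy On-Demand vs Spot cost comparison for a GPU
# training job. All figures below are hypothetical placeholders, not
# actual AWS rates.

def training_cost(hourly_rate: float, hours: float, num_instances: int) -> float:
    """Total cost of running a training job across several instances."""
    return hourly_rate * hours * num_instances

# Hypothetical figures for a multi-instance fine-tuning run.
ON_DEMAND_RATE = 32.77   # assumed $/hour per instance (placeholder)
SPOT_RATE = 9.83         # assumed $/hour per instance (placeholder)
HOURS = 48
INSTANCES = 4

on_demand = training_cost(ON_DEMAND_RATE, HOURS, INSTANCES)
spot = training_cost(SPOT_RATE, HOURS, INSTANCES)
savings_pct = 100 * (on_demand - spot) / on_demand

print(f"On-Demand: ${on_demand:,.2f}")
print(f"Spot:      ${spot:,.2f}")
print(f"Savings:   {savings_pct:.0f}%")
```

In practice this kind of comparison also has to weigh Spot interruption risk, checkpointing overhead, and Savings Plans commitments, which is exactly the trade‑off analysis the role calls for.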
SKILLS
Must have
- Extensive experience (typically 7+ years) in cloud architecture, infrastructure engineering, or platform engineering, with a strong focus on AWS.
- Proven expertise designing and operating AI/ML and Generative AI infrastructure at scale.
- Deep knowledge of AWS services relevant to AI workloads (SageMaker, Bedrock, EKS, EC2 GPU instances, Lambda, VPC, IAM, KMS, S3).
- Hands‑on experience with GPU compute, distributed training, and high‑performance inference environments.
- Strong understanding of MLOps/LLMOps practices, CI/CD pipelines, and model deployment workflows.
- Experience architecting secure, compliant, and highly available cloud environments.
- Proficiency with Infrastructure as Code (Terraform or CloudFormation).
- Familiarity with vector databases, RAG architectures, and scalable data storage patterns.
- Strong collaboration skills and the ability to work closely with AI/ML, DevOps, and engineering teams.
- Excellent documentation and communication skills.
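As a pointer to what the "vector databases, RAG architectures" bullet above refers to, here is a minimal in‑memory sketch of the retrieval step in a RAG pipeline. In production this would be backed by a vector store such as OpenSearch or Aurora PostgreSQL with pgvector; the toy document texts and embeddings below are invented for illustration.

```python
# Minimal in-memory sketch of RAG retrieval: rank documents by cosine
# similarity between the query embedding and stored document embeddings.
# Vectors and texts are toy placeholders, not real embeddings.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec: list[float], documents: list[dict], top_k: int = 2) -> list[str]:
    """Return the texts of the top_k documents most similar to the query."""
    scored = sorted(
        documents,
        key=lambda doc: cosine_similarity(query_vec, doc["embedding"]),
        reverse=True,
    )
    return [doc["text"] for doc in scored[:top_k]]

docs = [
    {"text": "GPU autoscaling guide", "embedding": [0.9, 0.1, 0.0]},
    {"text": "IAM policy reference",  "embedding": [0.0, 0.2, 0.9]},
    {"text": "Spot instance pricing", "embedding": [0.8, 0.3, 0.1]},
]

print(retrieve([1.0, 0.2, 0.0], docs, top_k=2))
```

A real deployment replaces the list comprehension with an indexed nearest‑neighbour query (e.g. pgvector's distance operators) so retrieval stays fast as the corpus grows.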
AI Infra Architecture in London (Employer: Luxoft)
Contact Detail:
Luxoft Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the AI Infra Architecture role in London
✨Network Like a Pro
Get out there and connect with folks in the AI and cloud architecture space. Attend meetups, webinars, or even online forums. You never know who might have the inside scoop on job openings or can put in a good word for you!
✨Show Off Your Skills
When you get the chance to chat with potential employers, make sure to highlight your hands-on experience with AWS and GPU compute. Share specific examples of projects you've worked on that relate to Generative AI and LLM workloads. This will help you stand out from the crowd!
✨Tailor Your Approach
Before any interview, do your homework! Research the company’s current AI initiatives and think about how your skills can contribute. Tailoring your conversation to their needs shows you're genuinely interested and ready to hit the ground running.
✨Apply Through Us!
Don’t forget to check out our website for the latest job openings. Applying directly through us not only gives you a better chance but also keeps you in the loop about new opportunities in the AI infrastructure space. Let’s land that dream job together!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with AWS and AI infrastructure. We want to see how your skills match the job description, so don’t be shy about showcasing your relevant projects!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re passionate about AI infrastructure and how your background makes you the perfect fit for this role. Let us know what excites you about working with Generative AI.
Showcase Your Technical Skills: When filling out your application, be specific about your technical expertise. Mention your hands-on experience with GPU compute, MLOps, and any relevant AWS services. We love seeing concrete examples of your work!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it’s super easy!
How to prepare for a job interview at Luxoft
✨Know Your AWS Inside Out
Make sure you brush up on your AWS knowledge, especially the services mentioned in the job description like SageMaker, EKS, and Lambda. Be ready to discuss how you've used these tools in past projects, as this will show your hands-on experience and understanding of cloud-native AI infrastructure.
✨Showcase Your MLOps Expertise
Prepare to talk about your experience with MLOps/LLMOps practices. Have specific examples ready that demonstrate how you've built and governed pipelines using tools like SageMaker Pipelines or CodePipeline. This will highlight your ability to implement best practices in model deployment workflows.
✨Demonstrate Problem-Solving Skills
Think of scenarios where you've had to optimise infrastructure for performance or cost efficiency. Be ready to discuss your strategies for using Spot Instances or right-sizing, as well as how you’ve tackled challenges in distributed training or low-latency inference environments.
✨Collaboration is Key
Since this role involves working closely with AI/ML teams, prepare to share examples of successful collaborations. Highlight how you’ve communicated complex technical concepts to non-technical stakeholders and ensured seamless implementation across multidisciplinary teams.