At a Glance
- Tasks: Join Huawei's Serverless LLM team to innovate in AI infrastructure and cloud computing.
- Company: Huawei, a global leader in ICT with a mission for a fully connected world.
- Benefits: 33 days annual leave, private medical insurance, and a supportive learning environment.
- Why this job: Be at the forefront of AI technology and make a real impact on global services.
- Qualifications: Experience in LLM optimization and understanding of distributed systems required.
- Other info: Collaborate with experts and enjoy excellent career growth opportunities.
The predicted salary is between £48,000 and £72,000 per year.
Join to apply for the Serverless LLM Architect role at Huawei Technologies Research & Development (UK) Ltd.
Huawei is a leading global provider of information and communications technology (ICT) infrastructure and smart devices. Our vision and mission are to bring digital to every person, home and organization for a fully connected, intelligent world. To this end, we will drive ubiquitous connectivity and promote equal access to networks; bring cloud and artificial intelligence to all four corners of the earth to provide superior computing power where and when it is needed; build digital platforms to help all industries and organizations become more agile, efficient, and dynamic; and redefine the user experience with AI, making it more personalized for people in all aspects of their lives.
As a pioneer in global technological innovation, Huawei is committed to advancing the development of information technologies and has made remarkable achievements in server and device services.
Joining the Huawei Serverless LLM team, you will work in cutting-edge fields such as AI infrastructure, data systems, artificial intelligence, and cloud computing. You will work side by side with global expert teams to serve hundreds of millions of service requests.
Key Responsibilities:
- Use serverless methods to ensure excellent performance of the LLM service in high-concurrency scenarios, optimizing the response speed and resource consumption of the LLM service to achieve high-throughput, low-latency inference.
- Explore the next-generation distributed inference engine to ensure high reliability, scalability, and O&M convenience of the system and support large-scale LLM commercial use in the future.
- Track the latest LLM optimization technology to ensure model performance while effectively reducing computing costs, improving loading efficiency, and achieving ultimate system throughput.
- Identify and define future-oriented technical challenges in the serverless LLM field, and enhance technical communication and cooperation with European academia.
- Work closely with cross-functional teams to participate in the innovation of AI infrastructure, data systems, and cloud computing technologies, and promote the commercial application and implementation of Huawei's serverless LLM architecture.
Requirements:
- Understand the principles and architecture design of LLMs, with strong experience in LLM optimization and serving, including techniques for reducing resource consumption and response latency.
- Have a basic command of distributed system frameworks and serverless architecture, and a good command of the core concepts of distributed computing.
- Have experience in designing and optimizing large-scale distributed cluster systems.
- Have a basic command of common serverless techniques such as on-demand invocation, automatic scaling, and load prediction and balancing.
- Be able to independently solve complex technical problems, demonstrate team leadership and collaboration, take ownership of responsibilities, and work closely with cross-functional teams to promote the application and commercialization of serverless LLM technology.
- Experience in LLM algorithm optimization is preferred.
- Papers or project achievements related to cutting-edge serverless technologies, and experience publishing at AI or cloud computing conferences, are preferred.
- Familiarity with low-level architectures such as distributed systems and operating systems is preferred.
Benefits:
- 33 days annual leave entitlement per year (including UK public holidays)
- Group Personal Pension
- Life insurance
- Private medical insurance
- Medical expense claim scheme
- Employee Assistance Program
- Cycle to work scheme
- Company sports club and social events
- Additional time off for learning and development
AI Infrastructure Architect in Edinburgh employer: Huawei Technologies Research & Development (UK) Ltd
Contact Detail:
Huawei Technologies Research & Development (UK) Ltd Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the AI Infrastructure Architect role in Edinburgh
✨Tip Number 1
Network like a pro! Reach out to current employees at Huawei on LinkedIn or other platforms. Ask them about their experiences and any tips they might have for landing the Serverless LLM Architect role. Personal connections can make a huge difference!
✨Tip Number 2
Prepare for the interview by brushing up on your knowledge of LLMs and serverless architecture. We recommend creating a list of potential questions you might be asked and practicing your answers. Confidence is key, so know your stuff!
✨Tip Number 3
Showcase your passion for innovation! During interviews, share your thoughts on the future of AI infrastructure and how you can contribute to Huawei's mission. This will demonstrate your enthusiasm and alignment with their vision.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets noticed. Plus, it shows you’re serious about joining the team at Huawei. Let’s get you that job!
We think you need these skills to ace the AI Infrastructure Architect role in Edinburgh
Some tips for your application 🫡
Tailor Your Application: Make sure to customise your CV and cover letter for the AI Infrastructure Architect role. Highlight your experience with LLM optimisation and serverless architecture, as this will show us you understand what we're looking for.
Showcase Your Technical Skills: Don’t hold back on detailing your technical expertise! We want to see your knowledge of distributed systems and cloud computing. Use specific examples from your past work to demonstrate how you've tackled similar challenges.
Be Clear and Concise: When writing your application, keep it straightforward. We appreciate clarity, so avoid jargon unless it's necessary. Make sure your key points stand out, so we can easily see why you're a great fit for our team.
Apply Through Our Website: We encourage you to submit your application through our website. It’s the best way for us to receive your details and ensures you’re considered for the role. Plus, it’s super easy to do!
How to prepare for a job interview at Huawei Technologies Research & Development (UK) Ltd
✨Know Your LLMs Inside Out
Make sure you have a solid understanding of the principles and architecture design of large language models (LLMs). Brush up on your experience with LLM optimisation techniques, as you'll likely be asked about how to reduce resource consumption and response delays during the interview.
✨Familiarise Yourself with Serverless Architecture
Since the role focuses on serverless methods, it’s crucial to understand the core concepts of distributed computing and serverless technologies. Be prepared to discuss your experience with on-demand invoking, automatic scaling, and load balancing, as these will be key topics in your interview.
✨Showcase Your Problem-Solving Skills
Huawei values innovation and technical breakthroughs, so come ready to demonstrate your ability to tackle complex technical challenges. Think of examples from your past work where you led a team or collaborated cross-functionally to solve a problem, and be ready to share those stories.
✨Stay Updated on Industry Trends
Keep an eye on the latest advancements in AI infrastructure and cloud computing. Being able to discuss recent developments or papers related to serverless technologies will show your passion for the field and your commitment to staying at the forefront of innovation.