At a Glance
- Tasks: Design and build core components of distributed AI systems for cutting-edge projects.
- Company: Leading advanced technology research centre in the UK, focused on AI infrastructure.
- Benefits: Work with top experts, tackle complex challenges, and shape the future of AI.
- Other info: Dynamic environment with opportunities for innovation and collaboration.
- Why this job: Join a pioneering team transforming research into real-world AI solutions.
- Qualifications: BSc/MSc in Computer Science or related field; strong programming and systems knowledge.
The predicted salary is between £60,000 and £80,000 per year.
We're partnered with a leading advanced technology research centre in the UK, focused on building next-generation AI-native infrastructure. They're seeking a Systems Research Engineer for AI Infrastructure and Distributed Systems to join them onsite in Edinburgh on a permanent basis.
As large language models continue to reshape the software stack, this team is pioneering scalable, high-performance systems for training and serving AI at data centre scale. Sitting at the intersection of cutting-edge research and real-world deployment, the group transforms novel system architectures into production-ready technologies that will define the future of distributed AI.
This is an excellent opportunity for engineers with a strong systems background who want to work on complex, research-driven challenges in distributed infrastructure, AI serving, and performance optimisation.
The Role
You will design and build core components of distributed AI systems, working across infrastructure layers to improve scalability, efficiency, and performance of large-scale model serving environments.
Key Responsibilities:
- Distributed Systems Engineering
  - Design, implement, and evaluate distributed system components for AI and data-intensive workloads
  - Build scalable infrastructure across heterogeneous environments (CPU, GPU, accelerators)
  - Develop advanced scheduling and serving systems for large-scale AI workloads
- Performance Optimisation
  - Profile and optimise large-scale inference pipelines
  - Improve key-value cache efficiency and memory scheduling
  - Identify bottlenecks and enhance system scalability using systematic performance analysis
- AI Serving Infrastructure
  - Develop low-latency, multi-tenant, fault-tolerant model serving systems
  - Work on areas such as cache sharing, data locality, and cluster scheduling
  - Prototype and evaluate next-generation inference architectures
- Research & Innovation
  - Contribute to cutting-edge systems and ML research
  - Publish at leading conferences and drive internal adoption of new approaches
- Collaboration
  - Work closely with global research and engineering teams
  - Communicate technical findings and system insights clearly
Requirements
- BSc or MSc in Computer Science, Electrical Engineering, or related field
- Strong fundamentals in distributed systems and operating systems
- Experience with machine learning systems and AI inference infrastructure
- Hands-on experience with LLM serving frameworks
- Strong programming skills in C/C++, plus Python for prototyping and experimentation
- Experience with performance profiling and optimisation tools
- Solid understanding of distributed algorithms
Nice to Have
- PhD in systems, distributed computing, or AI infrastructure
- Publications in top-tier systems or ML conferences
- Experience with load balancing, fault tolerance, and resource scheduling
- Background in large-scale cloud or AI infrastructure environments
Why Apply?
- Work on cutting-edge AI infrastructure challenges at scale
- Bridge research and real-world system deployment
- Collaborate with leading experts in distributed systems and AI
- Shape the future of large-scale AI systems
In accordance with local employment laws, applicants must have current, valid authorisation to work in the UK at the time of application. We are unable to sponsor employment visas for this role. Applications from individuals without existing work authorisation for the UK cannot be considered.
Systems Research Engineer - AI Infrastructure / Distributed Systems in Edinburgh employer: European Tech Recruit
Contact Details:
European Tech Recruit Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Systems Research Engineer - AI Infrastructure / Distributed Systems role in Edinburgh
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with people on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio or GitHub repository showcasing your projects related to distributed systems and AI infrastructure. This gives potential employers a taste of what you can do beyond your CV.
✨Tip Number 3
Prepare for technical interviews by brushing up on your knowledge of distributed algorithms and performance optimisation techniques. Practice coding challenges and system design questions to boost your confidence.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen. Plus, it shows you’re genuinely interested in joining our team and tackling those cutting-edge AI challenges.
We think you need these skills to ace the Systems Research Engineer - AI Infrastructure / Distributed Systems role in Edinburgh
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with distributed systems and AI infrastructure. We want to see how your skills align with the role, so don’t be shy about showcasing relevant projects or technologies you've worked with!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about AI and distributed systems. We love seeing candidates who can connect their personal interests with the company's mission.
Showcase Your Technical Skills: Be specific about your programming skills in C/C++ and Python. If you’ve worked with LLM serving frameworks or performance optimisation tools, let us know! We’re keen on candidates who can hit the ground running.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for this exciting opportunity. Don’t miss out!
How to prepare for a job interview at European Tech Recruit
✨Know Your Systems Inside Out
Make sure you brush up on your knowledge of distributed systems and AI infrastructure. Be ready to discuss specific projects you've worked on, especially those involving large-scale model serving or performance optimisation. This will show that you not only understand the theory but also have practical experience.
✨Showcase Your Problem-Solving Skills
Prepare to tackle hypothetical scenarios during the interview. Think about how you would identify bottlenecks in a system or improve cache efficiency. Practising these problem-solving questions can help you articulate your thought process clearly, which is crucial for this role.
✨Familiarise Yourself with Their Tech Stack
Research the tools and technologies used by the company, especially those related to AI inference frameworks and performance profiling. If you have experience with similar tools, be sure to mention it. This shows that you're proactive and genuinely interested in the position.
✨Communicate Clearly and Confidently
During the interview, focus on communicating your technical findings and insights in a clear manner. Practice explaining complex concepts in simple terms, as this will demonstrate your ability to collaborate effectively with both technical and non-technical teams.