At a Glance
- Tasks: Deploy and optimise advanced machine learning models in production environments.
- Company: Join a forward-thinking tech company focused on innovation and collaboration.
- Benefits: Enjoy competitive salary, flexible work options, and opportunities for professional growth.
- Why this job: Make a real impact by enhancing ML systems and driving infrastructure improvements.
- Qualifications: Experience in deploying ML systems, strong Python skills, and troubleshooting expertise.
- Other info: Dynamic hybrid or remote work environment with excellent career advancement potential.
The predicted salary is between £36,000 and £60,000 per year.
My client is looking for an experienced ML Infrastructure Engineer to support the deployment, optimisation and scaling of advanced machine learning models in production environments. This role sits at the intersection of research and engineering, focused on ensuring models are reliably transitioned from experimentation through to large-scale deployment.
You will work closely with research and platform teams to build and maintain high-performance inference systems, improve deployment processes and help drive infrastructure improvements that enable faster model iteration and release cycles.
- Productionise machine learning models from research through validation, staging and live deployment
- Improve performance and reliability across GPU-based environments
- Design and implement model serving and deployment workflows
- Develop monitoring and observability tools to track system performance, errors and utilisation
- Support data preparation and model integration as part of the wider development lifecycle
- Collaborate with research, engineering and infrastructure teams to improve deployment efficiency and platform scalability
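To give a flavour of the serving and observability responsibilities above, here is a minimal, framework-free sketch of a model-serving wrapper that also records basic request metrics. Everything here is illustrative: `predict` is a hypothetical stand-in for a real trained model, and a production system would sit behind a proper serving framework rather than plain function calls.

```python
import json
import time
from collections import Counter

# Hypothetical stand-in for a real model; a production system would load a
# trained artefact (e.g. a TorchScript or ONNX model) instead.
def predict(features):
    return {"score": sum(features) / len(features)}

class ModelServer:
    """Minimal serving wrapper that also records basic observability metrics."""

    def __init__(self, model_fn):
        self.model_fn = model_fn
        self.metrics = Counter()   # success/error counts, as the monitoring bullet suggests
        self.latencies_ms = []     # per-request latency samples

    def handle(self, raw_request: str) -> str:
        start = time.perf_counter()
        try:
            payload = json.loads(raw_request)
            result = self.model_fn(payload["features"])
            self.metrics["requests_ok"] += 1
            return json.dumps({"status": "ok", "result": result})
        except (KeyError, ValueError, ZeroDivisionError) as exc:
            self.metrics["requests_failed"] += 1
            return json.dumps({"status": "error", "reason": str(exc)})
        finally:
            # Record latency for every request, successful or not.
            self.latencies_ms.append((time.perf_counter() - start) * 1000.0)

server = ModelServer(predict)
ok_response = server.handle('{"features": [1.0, 2.0, 3.0]}')
bad_response = server.handle('{"wrong_key": []}')
```

The point of the sketch is the shape, not the details: serving, error handling and metrics collection live together, so the same wrapper that answers requests also feeds the monitoring and observability tooling described above.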
- Proven experience deploying and maintaining ML inference systems in production environments
- Strong programming experience in Python and familiarity with modern machine learning frameworks
- Experience supporting GPU workloads and performance optimisation
- Strong troubleshooting skills across performance, scaling and system reliability
- Experience building or improving model serving infrastructure
- Understanding of distributed training or inference techniques
- Experience debugging low-level performance or hardware-related issues
- Exposure to real-time or latency-sensitive ML applications
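Experience with latency-sensitive applications usually means reasoning about tail latency (p99) rather than averages. As a minimal illustration, here is one way to profile inference latency percentiles; `fake_inference` is a hypothetical stand-in for a real forward pass, and the warm-up convention is an assumption borrowed from common benchmarking practice.

```python
import random
import statistics
import time

def fake_inference(x):
    # Stand-in for a real forward pass; sleeps briefly to simulate work.
    time.sleep(0.001)
    return x * 2

def latency_profile(fn, inputs, warmup=5):
    # Warm-up runs are excluded from measurement: first calls often pay
    # one-off costs (JIT compilation, CUDA context creation, cold caches).
    for x in inputs[:warmup]:
        fn(x)
    samples = []
    for x in inputs:
        start = time.perf_counter()
        fn(x)
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p99_ms": samples[min(len(samples) - 1, int(len(samples) * 0.99))],
    }

profile = latency_profile(fake_inference, [random.random() for _ in range(50)])
```

In an interview or on the job, the distinction between the median and the p99 is exactly where discussions of real-time reliability tend to focus, since a handful of slow requests can dominate user-facing behaviour.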
Machine Learning Engineer (hybrid or remote) in London
Employer: Block MB
Contact Detail:
Block MB Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Machine Learning Engineer (hybrid or remote) role in London
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups or webinars, and connect with people on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your machine learning projects, especially those involving deployment and optimisation. This will give potential employers a taste of what you can do and set you apart from the crowd.
✨Tip Number 3
Prepare for technical interviews by brushing up on your Python skills and familiarising yourself with ML frameworks. Practice coding challenges and system design questions that focus on model serving and infrastructure improvements.
✨Tip Number 4
Don’t forget to apply through our website! We’ve got loads of opportunities waiting for talented individuals like you. Plus, it’s a great way to ensure your application gets seen by the right people.
We think you need these skills to ace the Machine Learning Engineer (hybrid or remote) role in London
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with ML infrastructure and deployment. We want to see how you've tackled challenges in production environments, so don’t hold back on those details!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about machine learning and how your skills align with our needs. We love seeing genuine enthusiasm for the role.
Showcase Relevant Projects: If you've worked on any projects related to model serving or GPU optimisation, make sure to mention them. We’re keen to see real-world examples of your work and how you’ve contributed to successful deployments.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it’s super easy!
How to prepare for a job interview at Block MB
✨Know Your ML Models Inside Out
Make sure you can discuss the machine learning models you've worked with in detail. Be prepared to explain how you productionised them, the challenges you faced, and how you optimised their performance. This shows your hands-on experience and understanding of the entire lifecycle.
✨Brush Up on Your Python Skills
Since strong programming experience in Python is crucial for this role, ensure you're comfortable discussing your coding practices. You might be asked to solve a problem or even write some code during the interview, so practice common algorithms and data structures relevant to ML.
✨Familiarise Yourself with Deployment Workflows
Understand the end-to-end deployment process for ML models. Be ready to talk about how you've designed and implemented model serving workflows in the past. Highlight any tools or frameworks you've used to improve deployment efficiency and scalability.
✨Prepare for Technical Troubleshooting Questions
Expect questions that test your troubleshooting skills, especially around performance and reliability in GPU-based environments. Think of specific examples where you identified and resolved issues, and be ready to discuss the techniques you used to debug low-level performance problems.