At a Glance
- Tasks: Design and build scalable ML platforms for cutting-edge AI projects.
- Company: Leading delivery platform with a mission to impact millions.
- Benefits: 33 days PTO, competitive salary, and a vibrant work culture in Dubai.
- Why this job: Join a diverse team and work on innovative GenAI technologies.
- Qualifications: Experience in ML engineering and strong Python skills required.
- Other info: Opportunity for career growth in a dynamic, tech-driven environment.
The predicted salary is between £48,000 and £72,000 per year.
As the leading delivery platform in the region, we have a unique responsibility and opportunity to positively impact millions of customers, restaurant partners, and riders. To achieve our mission, we must scale and continuously evolve our machine learning capabilities, including cutting-edge GenAI initiatives. This demands robust, efficient, and scalable ML platforms that empower our teams to rapidly develop, deploy, and operate intelligent systems.
As an ML Platform Engineer, your mission is to design, build, and enhance the infrastructure and tooling that accelerates the development, deployment, and monitoring of traditional ML and GenAI models at scale. You will collaborate closely with data scientists, ML engineers, GenAI specialists, and product teams to deliver seamless ML workflows—from experimentation to production serving—ensuring operational excellence across our ML and GenAI systems.
What’s On Your Plate?
- Design, build, and maintain scalable, reusable, and reliable ML platforms and tooling that support the entire ML lifecycle, including data ingestion, model training, evaluation, deployment, and monitoring for both traditional and generative AI models.
- Develop standardized ML workflows and templates using MLflow and other platforms, enabling rapid experimentation and deployment cycles (see the MLflow sketch after this list).
- Implement robust CI/CD pipelines, Docker containerization, model registries, and experiment tracking to support reproducibility, scalability, and governance in ML and GenAI.
- Collaborate closely with GenAI experts to integrate and optimize GenAI technologies, including transformers, embeddings, vector databases (e.g., Pinecone, Redis, Weaviate), and real-time retrieval-augmented generation (RAG) systems.
- Automate and streamline ML and GenAI model training, inference, deployment, and versioning workflows, ensuring consistency, reliability, and adherence to industry best practices.
- Ensure reliability, observability, and scalability of production ML and GenAI workloads by implementing comprehensive monitoring, alerting, and continuous performance evaluation (a minimal drift-check sketch also follows this list).
- Integrate infrastructure components such as real-time model serving frameworks (e.g., TensorFlow Serving, NVIDIA Triton, Seldon), Kubernetes orchestration, and cloud solutions (AWS/GCP) for robust production environments.
- Drive infrastructure optimization for generative AI use-cases, including efficient inference techniques (batching, caching, quantization), fine-tuning, prompt management, and model updates at scale.
- Partner with data engineering, product, infrastructure, and GenAI teams to align ML platform initiatives with broader company goals, infrastructure strategy, and innovation roadmap.
- Contribute actively to internal documentation, onboarding, and training programs, promoting platform adoption and continuous improvement.
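To make the standardized-workflow bullet concrete, here is a minimal sketch of an MLflow training-and-registration step. It assumes a reachable, registry-enabled MLflow tracking server and scikit-learn; the experiment name, parameters, and registered model name are illustrative, not talabat's actual setup.

```python
# Minimal sketch of a standardized MLflow training-and-registration step.
# Assumes a reachable, registry-enabled MLflow tracking backend and scikit-learn;
# all names (experiment, params, model) are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("demand-forecasting-demo")  # illustrative experiment name

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    # Track parameters and evaluation metrics so runs stay reproducible and comparable.
    mlflow.log_params(params)
    mlflow.log_metric("test_accuracy", accuracy_score(y_test, model.predict(X_test)))

    # Register the artifact so a CI/CD pipeline can promote it by version.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="demo-classifier",  # illustrative registry entry
    )
```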
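And for the monitoring and alerting bullet, a minimal sketch of a per-feature drift check using a two-sample Kolmogorov-Smirnov test; the threshold, feature names, and the idea of wiring the result into alerting are illustrative assumptions rather than an existing internal API.

```python
# Minimal sketch of a production drift check for the monitoring/alerting bullet.
# Uses a two-sample Kolmogorov-Smirnov test per feature; threshold and feature
# names are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, live: np.ndarray,
                 feature_names: list[str], p_threshold: float = 0.01) -> list[str]:
    """Return the features whose live distribution differs significantly from training."""
    drifted = []
    for i, name in enumerate(feature_names):
        _, p_value = ks_2samp(reference[:, i], live[:, i])
        if p_value < p_threshold:
            drifted.append(name)
    return drifted

# Illustrative usage: reference = training-time sample, live = recent serving traffic.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(5_000, 2))
live = np.column_stack([rng.normal(0.0, 1.0, 5_000),   # stable feature
                        rng.normal(0.7, 1.0, 5_000)])  # shifted feature
print(detect_drift(reference, live, ["basket_value", "delivery_distance"]))
# A real pipeline would route this result to alerting rather than printing it.
```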
What did we order?
Technical Experience:
- Strong software engineering background with experience in building distributed systems or platforms designed for machine learning and AI workloads.
- Expert-level proficiency in Python and familiarity with ML frameworks (TensorFlow, PyTorch), infrastructure tooling (MLflow, Kubeflow, Ray), and popular APIs (Hugging Face, OpenAI, LangChain).
- Experience implementing modern MLOps practices, including model lifecycle management, CI/CD, Docker, Kubernetes, model registries, and infrastructure-as-code tools (Terraform, Helm).
- Demonstrated experience working with cloud infrastructure, ideally AWS or GCP, including Kubernetes clusters (GKE/EKS), serverless architectures, and managed ML services (e.g., Vertex AI, SageMaker).
- Proven experience with generative AI technologies: transformers, embeddings, prompt engineering strategies, fine-tuning vs. prompt-tuning, vector databases, and retrieval-augmented generation (RAG) systems (see the retrieval sketch after this list).
- Experience designing and maintaining real-time inference pipelines, including integrations with feature stores, streaming data platforms (Kafka, Kinesis), and observability platforms.
- Familiarity with SQL and data warehouse modeling; capable of managing complex data queries, joins, aggregations, and transformations.
- Solid understanding of ML monitoring, including identifying model drift and decay, latency optimization, cost management, and efficient scaling of API-based GenAI applications.
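As a rough illustration of the RAG experience described above, the sketch below shows only the retrieval step against a toy in-memory vector store. A production system would use a managed vector database (Pinecone, Redis, Weaviate) and a real embedding model; embed() here is a stand-in, and the documents are invented examples.

```python
# Minimal sketch of the retrieval step in a RAG system.
# A real deployment would use a managed vector database and a production embedding
# model; embed() is a stand-in and the store is a toy in-memory array.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in embedding: pseudo-random unit vector seeded by the text (not a real model)."""
    rng = np.random.default_rng(sum(ord(c) for c in text))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

documents = [
    "Riders are assigned to orders based on distance and load.",
    "Refunds are issued automatically for late deliveries.",
    "Restaurant partners can update menus through the partner portal.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query by cosine similarity."""
    scores = doc_vectors @ embed(query)  # vectors are unit-norm, so dot product = cosine
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

# The retrieved passages would then be inserted into the prompt sent to the LLM.
print(retrieve("How do refunds work for delayed orders?"))
```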
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field; advanced degree is a plus.
- 2+ years in a tech lead role and 5+ years of experience in ML platform engineering, ML infrastructure, generative AI, or closely related roles.
- Proven track record of successfully building and operating ML infrastructure at scale, ideally supporting generative AI use-cases and complex inference scenarios.
- Strategic mindset with strong problem-solving skills and effective technical decision-making abilities.
- Excellent communication and collaboration skills, comfortable working cross-functionally across diverse teams and stakeholders.
- Strong sense of ownership, accountability, pragmatism, and proactive bias for action.
Why join us in Dubai?
365 days of sun + 33 days PTO. A diverse, international team of top talent.
MLOps Platform Engineer in London. Employer: talabat
Contact Detail:
talabat Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the MLOps Platform Engineer role in London
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with people on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those related to ML and GenAI. This gives potential employers a taste of what you can do and sets you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on your technical knowledge and soft skills. Practice common interview questions and be ready to discuss your past experiences in detail. Confidence is key!
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are genuinely interested in joining our team.
We think you need these skills to ace the MLOps Platform Engineer role in London
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the skills and experiences that match the MLOps Platform Engineer role. Highlight your expertise in Python, ML frameworks, and any relevant projects you've worked on. We want to see how you can contribute to our mission!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about machine learning and generative AI. Share specific examples of your past work and how it aligns with what we do at talabat. Let us know why you’re the perfect fit!
Showcase Your Projects: If you've got any personal or professional projects related to MLOps or generative AI, don’t hold back! Include links or descriptions in your application. We love seeing practical applications of your skills and how you tackle real-world problems.
Apply Through Our Website: We encourage you to apply directly through our website for the best chance of getting noticed. It’s super easy, and you’ll be able to keep track of your application status. Plus, we love seeing candidates who take the initiative!
How to prepare for a job interview at talabat
✨Know Your Tech Inside Out
Make sure you’re well-versed in the technologies mentioned in the job description, like Python, TensorFlow, and Kubernetes. Brush up on your MLOps practices and be ready to discuss how you've implemented CI/CD pipelines or worked with cloud infrastructure.
✨Showcase Your Collaboration Skills
Since this role involves working closely with data scientists and GenAI specialists, prepare examples of past collaborations. Highlight how you’ve contributed to team projects and how you can bridge gaps between technical and non-technical teams.
✨Prepare for Scenario-Based Questions
Expect questions that ask you to solve real-world problems related to ML platforms. Think about challenges you've faced in previous roles and how you overcame them, especially in areas like model monitoring and infrastructure optimisation.
✨Demonstrate Your Passion for Innovation
This role is all about evolving machine learning capabilities. Be ready to discuss any personal projects or research you've done in generative AI or other cutting-edge technologies. Show them you’re not just a techie but also someone who’s excited about the future of ML.