At a Glance
- Tasks: Design and enhance scalable ML platforms for cutting-edge GenAI initiatives.
- Company: Leading delivery platform with a mission to impact millions positively.
- Benefits: 33 days PTO, diverse team, and sunny Dubai location.
- Why this job: Join a dynamic team and work on innovative ML and GenAI projects.
- Qualifications: Strong software engineering skills and experience in ML infrastructure.
- Other info: Collaborative environment with opportunities for professional growth.
The predicted salary is between £36,000 and £60,000 per year.
As the leading delivery platform in the region, we have a unique responsibility and opportunity to positively impact millions of customers, restaurant partners, and riders. To achieve our mission, we must scale and continuously evolve our machine learning capabilities, including cutting-edge GenAI initiatives. This demands robust, efficient, and scalable ML platforms that empower our teams to rapidly develop, deploy, and operate intelligent systems.
As an ML Platform Engineer, your mission is to design, build, and enhance the infrastructure and tooling that accelerates the development, deployment, and monitoring of traditional ML and GenAI models at scale. You’ll collaborate closely with data scientists, ML engineers, GenAI specialists, and product teams to deliver seamless ML workflows—from experimentation to production serving—ensuring operational excellence across our ML and GenAI systems.
What’s On Your Plate?
- Design, build, and maintain scalable, reusable, and reliable ML platforms and tooling that support the entire ML lifecycle, including data ingestion, model training, evaluation, deployment, and monitoring for both traditional and generative AI models.
- Develop standardized ML workflows and templates using MLflow and other platforms, enabling rapid experimentation and deployment cycles (a minimal MLflow sketch follows this list).
- Implement robust CI/CD pipelines, Docker containerization, model registries, and experiment tracking to support reproducibility, scalability, and governance in ML and GenAI.
- Collaborate closely with GenAI experts to integrate and optimize GenAI technologies, including transformers, embeddings, vector databases (e.g., Pinecone, Redis, Weaviate), and real-time retrieval-augmented generation (RAG) systems (see the retrieval sketch after this list).
- Automate and streamline ML and GenAI model training, inference, deployment, and versioning workflows, ensuring consistency, reliability, and adherence to industry best practices.
- Ensure reliability, observability, and scalability of production ML and GenAI workloads by implementing comprehensive monitoring, alerting, and continuous performance evaluation.
- Integrate infrastructure components such as real-time model serving frameworks (e.g., TensorFlow Serving, NVIDIA Triton, Seldon), Kubernetes orchestration, and cloud solutions (AWS/GCP) for robust production environments.
- Drive infrastructure optimization for generative AI use-cases, including efficient inference techniques (batching, caching, quantization), fine-tuning, prompt management, and model updates at scale (a micro-batching sketch follows this list).
- Partner with data engineering, product, infrastructure, and GenAI teams to align ML platform initiatives with broader company goals, infrastructure strategy, and innovation roadmap.
- Contribute actively to internal documentation, onboarding, and training programs, promoting platform adoption and continuous improvement.
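To make the MLflow workflow item above concrete, here is a minimal sketch of logging a run and registering the resulting model. The tracking URI, experiment name, hyperparameters, and registered model name are hypothetical placeholders, and the snippet assumes an MLflow tracking server with a database-backed registry rather than any actual production setup.

```python
# Minimal MLflow sketch: train, log params/metrics, and register the model.
# All names and the tracking URI are placeholders, not an actual production setup.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("http://localhost:5000")   # assumed tracking server with a registry backend
mlflow.set_experiment("demand-forecast-demo")      # hypothetical experiment name

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 8}
    model = RandomForestClassifier(**params).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))

    # Registering the model makes it discoverable to downstream CI/CD and serving jobs.
    mlflow.sklearn.log_model(
        model, artifact_path="model", registered_model_name="demand-forecast-demo"
    )
```

Wrapped into a reusable template, a pattern like this is what lets every team log runs and promote models the same way.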
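For the RAG item, the core retrieval step can be sketched without any managed service: an in-memory matrix stands in for a vector database such as Pinecone, Redis, or Weaviate, and embed() stands in for a real embedding model (a Hugging Face or OpenAI call in practice). The documents and query are invented for illustration.

```python
# Toy sketch of the retrieval step in a RAG pipeline.
import numpy as np

def embed(text: str, dim: int = 384) -> np.ndarray:
    """Hash-based stand-in for a real embedding model, so the sketch runs offline."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

documents = [
    "Riders are assigned based on distance and preparation time.",
    "Refunds are processed within 3-5 business days.",
    "Restaurant partners can update menus from the partner portal.",
]
index = np.stack([embed(d) for d in documents])     # rows are unit vectors

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query by cosine similarity."""
    scores = index @ embed(query)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

context = retrieve("How long does a refund take?")
prompt = "Answer using only this context:\n" + "\n".join(context)
print(prompt)   # this prompt would then be passed to a generative model
```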
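The inference-optimization item mentions batching; below is a minimal micro-batching sketch that groups concurrent requests into a single model call. The batch size, wait window, and fake_model() are illustrative assumptions; in production the batched call would typically hit a serving framework such as NVIDIA Triton or TensorFlow Serving.

```python
# Micro-batching sketch: collect requests for a short window, score them together.
import queue
import threading
import time
from concurrent.futures import ThreadPoolExecutor

MAX_BATCH = 8
MAX_WAIT_S = 0.02
request_queue: queue.Queue = queue.Queue()          # items are (features, reply_queue)

def fake_model(batch: list[list[float]]) -> list[float]:
    """Stand-in for a batched forward pass (Triton, TF Serving, or an in-process model)."""
    return [sum(features) for features in batch]

def batcher() -> None:
    while True:
        items = [request_queue.get()]               # block until the first request arrives
        deadline = time.monotonic() + MAX_WAIT_S
        while len(items) < MAX_BATCH:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                items.append(request_queue.get(timeout=remaining))
            except queue.Empty:
                break
        outputs = fake_model([features for features, _ in items])
        for (_, reply), out in zip(items, outputs):
            reply.put(out)                          # hand each caller its own result

threading.Thread(target=batcher, daemon=True).start()

def predict(features: list[float]) -> float:
    reply: queue.Queue = queue.Queue(maxsize=1)
    request_queue.put((features, reply))
    return reply.get()

# Concurrent callers end up sharing a single batched model call.
with ThreadPoolExecutor(max_workers=4) as pool:
    print(list(pool.map(predict, [[i, i + 1.0] for i in range(4)])))
```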
What did we order?
Technical Experience:
- Strong software engineering background with experience in building distributed systems or platforms designed for machine learning and AI workloads.
- Expert-level proficiency in Python and familiarity with ML frameworks (TensorFlow, PyTorch), infrastructure tooling (MLflow, Kubeflow, Ray), and popular GenAI libraries and APIs (Hugging Face, OpenAI, LangChain).
- Experience implementing modern MLOps practices, including model lifecycle management, CI/CD, Docker, Kubernetes, model registries, and infrastructure-as-code tools (Terraform, Helm).
- Demonstrated experience working with cloud infrastructure, ideally AWS or GCP, including Kubernetes clusters (GKE/EKS), serverless architectures, and managed ML services (e.g., Vertex AI, SageMaker).
- Proven experience with generative AI technologies: transformers, embeddings, prompt engineering strategies, fine-tuning vs. prompt-tuning, vector databases, and retrieval-augmented generation (RAG) systems.
- Experience designing and maintaining real-time inference pipelines, including integrations with feature stores, streaming data platforms (Kafka, Kinesis), and observability platforms (see the streaming sketch after this list).
- Familiarity with SQL and data warehouse modeling; capable of managing complex data queries, joins, aggregations, and transformations.
- Solid understanding of ML monitoring, including identifying model drift and decay, latency optimization, cost management, and scaling API-based GenAI applications efficiently (a drift-scoring sketch follows this list).
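The real-time pipeline item above usually reduces to a consume, score, and publish loop. A skeleton using confluent-kafka is sketched below; the broker address, topic names, message fields, and score() function are hypothetical, and in practice the features would come from a feature store and the model from a registry or a serving endpoint.

```python
# Skeleton of a consume -> score -> publish loop for real-time inference.
import json
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",          # assumed local broker
    "group.id": "eta-scoring",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["order_events"])                # hypothetical input topic

def score(event: dict) -> float:
    """Stand-in for a model call (local model, Triton, or a managed endpoint)."""
    return 0.1 * event.get("distance_km", 0.0) + 2.0 * event.get("items", 0)

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())
        prediction = {"order_id": event.get("order_id"), "eta_min": score(event)}
        producer.produce("eta_predictions", json.dumps(prediction).encode())
        producer.poll(0)                            # serve delivery callbacks
finally:
    consumer.close()
    producer.flush()
```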
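For the monitoring item, one widely used drift signal is the Population Stability Index (PSI) between a feature's training distribution and live traffic. The sketch below computes it for a single numeric feature; the synthetic data and the usual 0.1/0.25 thresholds are illustrative conventions, not a prescribed policy.

```python
# Population Stability Index (PSI) sketch for a single numeric feature.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI over equal-width bins of the expected distribution (live values outside
    the training range are simply dropped in this simplified version)."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)              # avoid log(0) for empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)   # offline distribution
live_feature = rng.normal(loc=0.4, scale=1.1, size=2_000)     # shifted live traffic

score = psi(train_feature, live_feature)
print(f"PSI = {score:.3f}")   # > 0.25 is a common rule of thumb for triggering an alert
```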
Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field; advanced degree is a plus.
- 2+ years in a tech lead role and 5+ years of experience in ML platform engineering, ML infrastructure, generative AI, or closely related roles.
- Proven track record of successfully building and operating ML infrastructure at scale, ideally supporting generative AI use-cases and complex inference scenarios.
- Strategic mindset with strong problem-solving skills and effective technical decision-making abilities.
- Excellent communication and collaboration skills, comfortable working cross-functionally across diverse teams and stakeholders.
- Strong sense of ownership, accountability, pragmatism, and proactive bias for action.
Why join us in Dubai?
365 days of sun + 33 days PTO. A diverse, international team of top talent.
MLOps Platform Engineer employer: talabat
Contact Details:
talabat Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the MLOps Platform Engineer role
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with people on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those related to MLOps and GenAI. This gives potential employers a taste of what you can do and sets you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on common technical questions and scenarios related to ML platforms. Practice explaining your thought process clearly, as communication is key when collaborating with cross-functional teams.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in joining our team!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the skills and experiences that match the MLOps Platform Engineer role. Highlight your expertise in Python, ML frameworks, and any relevant projects you've worked on. We want to see how you can contribute to our mission!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about machine learning and generative AI. Share specific examples of your past work and how it aligns with what we do at talabat. Let us know why you’re the perfect fit!
Showcase Your Projects: If you've got any personal or professional projects related to MLOps or generative AI, don’t hold back! Include links or descriptions in your application. We love seeing practical applications of your skills and how you tackle real-world problems.
Apply Through Our Website: We encourage you to apply directly through our website for the best chance of getting noticed. It’s super easy, and you’ll be able to keep track of your application status. Plus, we love seeing candidates who take the initiative!
How to prepare for a job interview at talabat
✨Know Your Tech Inside Out
Make sure you’re well-versed in the technologies mentioned in the job description, like Python, TensorFlow, and Kubernetes. Brush up on your knowledge of MLOps practices and be ready to discuss how you've implemented CI/CD pipelines or worked with cloud services like AWS or GCP.
✨Showcase Your Collaboration Skills
Since this role involves working closely with data scientists and GenAI specialists, prepare examples that highlight your teamwork. Think about times when you’ve successfully collaborated on projects, especially those involving ML workflows or generative AI technologies.
✨Prepare for Technical Questions
Expect technical questions that dive deep into your experience with ML platforms and infrastructure. Be ready to explain your approach to building scalable systems, managing model lifecycles, and optimizing performance. Practice articulating your thought process clearly.
✨Demonstrate Your Problem-Solving Mindset
The company values strategic thinking and problem-solving skills. Prepare to discuss challenges you've faced in previous roles and how you overcame them, particularly in relation to ML infrastructure or generative AI use cases. Show them you can think critically and act decisively.