At a Glance
- Tasks: Join our team to enhance ML model inference for search workflows and improve scalability.
- Company: Elastic is a leading Search AI company, empowering businesses with real-time data insights.
- Benefits: Enjoy flexible work schedules, competitive pay, health coverage, and generous vacation days.
- Why this job: Be part of a collaborative team driving innovation in AI and search technology.
- Qualifications: 5+ years in MLOps, experience with LLMs, and strong communication skills required.
- Other info: Diversity is key at Elastic; we welcome all backgrounds and perspectives.
The predicted salary is between £48,000 and £84,000 per year.
Elastic, the Search AI Company, enables everyone to find the answers they need in real time, using all their data, at scale — unleashing the potential of businesses and people. The Elastic Search AI Platform, used by more than 50% of the Fortune 500, brings together the precision of search and the intelligence of AI to enable everyone to accelerate the results that matter. By taking advantage of all structured and unstructured data — securing and protecting private information more effectively — Elastic’s complete, cloud-based solutions for search, security, and observability help organizations deliver on the promise of AI.
The Search Inference team is responsible for bringing performant, ergonomic, and cost-effective machine learning (ML) model inference to Search workflows. ML inference has become a crucial part of the modern search experience, whether used for query understanding, semantic search, RAG, or any other GenAI use case. Our goal is to simplify ML inference in Search workflows by focusing on large-scale inference capabilities for embedding and reranking models, available across the Elasticsearch user base. As a team, we are a collaborative, cross-functional group with backgrounds in information retrieval, natural language processing, and distributed systems. We work with Go microservices, Python, Ray Serve, and Kubernetes/KubeRay, and we deploy on AWS, GCP, and Azure. We provide thought leadership across a variety of mediums, including open code repositories, blog posts, and conference talks. We focus on meeting our customers' expectations for throughput, latency, and cost.
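For a flavour of the stack described above, here is a minimal, illustrative Ray Serve sketch of an embedding inference endpoint; the model choice, request shape, and resource settings are assumptions for the sketch, not details of the team's actual service.

```python
# Illustrative only: a minimal Ray Serve deployment for text-embedding inference.
# Model name, request shape, and resource settings are assumptions, not Elastic's.
from ray import serve
from sentence_transformers import SentenceTransformer
from starlette.requests import Request


@serve.deployment(num_replicas=2, ray_actor_options={"num_gpus": 1})
class Embedder:
    def __init__(self) -> None:
        # Any sentence-embedding model would do; E5 is used here as an example.
        self.model = SentenceTransformer("intfloat/e5-small-v2")

    async def __call__(self, request: Request) -> dict:
        payload = await request.json()
        vectors = self.model.encode(payload["texts"], normalize_embeddings=True)
        return {"embeddings": vectors.tolist()}


app = Embedder.bind()
# serve.run(app)  # runs locally; under KubeRay this would be deployed as a RayService
```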
We’re seeking an experienced MLOps Engineer to help us deliver on this vision.
What You Will Be Doing
- Working with the team (and other teams) to evolve our inference service so it may host LLMs in addition to existing models (ELSER, E5, Rerank).
- Enhancing the scalability and reliability of the service, and working with the team to ensure knowledge is shared and best practices are followed.
- Improving the cost and efficiency of the platform, making the best use of available infrastructure.
- Adapting existing solutions to use our inference service, ensuring a seamless transition.
What You Bring
- 5+ years working in an MLOps or related ML Engineering role.
- Production experience self-hosting & operating LLMs at scale for generative tasks via an inference framework such as Ray or KServe (or similar).
- Production experience with running and tuning specialized hardware for Generative AI workloads, especially GPUs via CUDA.
- Measured and articulate written and spoken communication skills. You work well with others and can turn your thoughts into concise, expressive correspondence: emails, issues, investigations, documentation, onboarding materials, and so on.
- An interest in learning new tools, workflows, and philosophies that can help you grow. You function well in an environment that drives towards change.
This role has tremendous opportunities for growth!
Please include whatever info you believe is relevant in your application: resume, GitHub profile, code samples, blog posts and writing samples, links to personal projects, etc.
Additional Information - We Take Care of Our People
As a distributed company, diversity drives our identity. Whether you’re looking to launch a new career or grow an existing one, Elastic is the type of company where you can balance great work with great life. Your age is only a number. It doesn’t matter if you’re just out of college or your children are; we need you for what you can do. We strive to have parity of benefits across regions and while regulations differ from place to place, we believe taking care of our people is the right thing to do.
- Competitive pay based on the work you do here and not your previous salary.
- Health coverage for you and your family in many locations.
- Ability to craft your calendar with flexible locations and schedules for many roles.
- Generous number of vacation days each year.
- Increase your impact - We match up to $2000 (or local currency equivalent) for financial donations and service.
- Up to 40 hours each year to use toward volunteer projects you love.
- Embracing parenthood with a minimum of 16 weeks of parental leave.
Different people approach problems differently. We need that. Elastic is an equal opportunity employer and is committed to creating an inclusive culture that celebrates different perspectives, experiences, and backgrounds. Qualified applicants will receive consideration for employment without regard to race, ethnicity, color, religion, sex, pregnancy, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, disability status, or any other basis protected by federal, state or local law, ordinance or regulation. We welcome individuals with disabilities and strive to create an accessible and inclusive experience for all individuals. To request an accommodation during the application or the recruiting process, please email candidate_accessibility@elastic.co. We will reply to your request within 24 business hours of submission.
Applicants have rights under Federal Employment Laws; view the posters linked below: Family and Medical Leave Act (FMLA) Poster; Pay Transparency Nondiscrimination Provision Poster; Employee Polygraph Protection Act (EPPA) Poster; and Know Your Rights Poster.
Elasticsearch develops and distributes encryption software and technology that is subject to U.S. export controls and licensing requirements for individuals who are located in or are nationals of the following sanctioned countries and regions: Belarus, Cuba, Iran, North Korea, Russia, Syria, the Crimea Region of Ukraine, the Donetsk People’s Republic (“DNR”), and the Luhansk People’s Republic (“LNR”). If you are located in or are a national of one of the listed countries or regions, an export license may be required as a condition of your employment in this role. Please note that national origin and/or nationality do not affect eligibility for employment with Elastic. Please see here for our Privacy Statement.
Search - Search Inference - Senior MLOps Engineer employer: Elasticsearch B.V.
Contact Detail:
Elasticsearch B.V. Recruiting Team
candidate_accessibility@elastic.co
StudySmarter Expert Advice 🤫
We think this is how you could land the Search - Search Inference - Senior MLOps Engineer role
✨Tip Number 1
Familiarise yourself with the technologies mentioned in the job description, such as Go microservices, Python, Ray Serve, and Kubernetes. Having hands-on experience or projects showcasing these skills can set you apart during discussions.
✨Tip Number 2
Engage with the community by contributing to open-source projects related to MLOps or search technologies. This not only enhances your skills but also demonstrates your commitment and passion for the field.
✨Tip Number 3
Prepare to discuss your previous experiences with LLMs and generative tasks. Be ready to share specific examples of how you've improved scalability and efficiency in past roles, as this aligns closely with what the team is looking for.
✨Tip Number 4
Showcase your communication skills by preparing to articulate complex technical concepts clearly. This will be crucial during interviews, especially when discussing your collaborative work with cross-functional teams.
We think you need these skills to ace the Search - Search Inference - Senior MLOps Engineer application
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights relevant experience in MLOps and machine learning engineering. Focus on your production experience with LLMs, specialized hardware, and any specific tools mentioned in the job description, such as Ray or KServe.
Craft a Strong Cover Letter: In your cover letter, express your enthusiasm for the role and the company. Mention how your skills align with the responsibilities of the position, particularly in enhancing scalability and reliability of ML services.
Showcase Your Work: Include links to your GitHub profile, code samples, and any blog posts or writing samples that demonstrate your expertise in MLOps and machine learning. This will help illustrate your practical experience and thought leadership.
Highlight Communication Skills: Since the role requires strong written and spoken communication skills, provide examples of how you've effectively communicated complex ideas in previous roles. This could be through documentation, presentations, or collaborative projects.
How to prepare for a job interview at Elasticsearch B.V.
✨Showcase Your MLOps Experience
Be prepared to discuss your previous roles in MLOps or ML Engineering. Highlight specific projects where you self-hosted and operated LLMs at scale, as well as your experience with inference frameworks like Ray or KServe.
✨Demonstrate Technical Proficiency
Familiarise yourself with the technologies mentioned in the job description, such as Go microservices, Python, Kubernetes, and cloud platforms like AWS, GCP, and Azure. Be ready to answer technical questions and possibly solve problems on the spot.
✨Communicate Clearly
Since strong communication skills are essential for this role, practice articulating your thoughts clearly and concisely. Prepare to discuss your written communication, such as documentation or blog posts, to showcase your ability to convey complex ideas effectively.
✨Emphasise Collaboration
Elastic values teamwork, so be ready to share examples of how you've worked collaboratively in cross-functional teams. Discuss how you’ve shared knowledge and best practices in previous roles, as this aligns with their focus on a collaborative work environment.