At a Glance
- Tasks: Develop and support our LLM-driven autonomous platform using cutting-edge AI technologies.
- Company: Anecdote is an innovative AI-first startup transforming customer feedback analysis for major brands.
- Benefits: Enjoy fully remote work, flexible hours, generous vacation, and stock options.
- Why this job: Join a dynamic team making a real impact in AI and customer experience.
- Qualifications: Proficiency in Python and experience with LangChain, AutoGen, or CrewAI required.
- Other info: Be part of a fast-paced environment with opportunities for continuous learning and growth.
The predicted salary is between £36,000 and £60,000 per year.
We’re looking for a Gen AI Scientist to develop, scale, and support our LLM-driven autonomous platform. You’ll work with LangChain, AutoGen, and CrewAI, and deploy open-source models (LLaMA, DeepSeek) on Google Cloud.
About the company: Anecdote is an innovative, AI-first startup revolutionising how companies analyse customer feedback. Our AI-powered platform consolidates feedback from app reviews, support chats, surveys, and social media into a single, easily accessible space. This enables companies like Grubhub, Dropbox, and Careem to derive actionable insights and deliver a better, real-time customer experience that drives sustainable growth. We are backed by top investors, having raised $3.5m to date.
Responsibilities:
- Develop, scale, and support our LLM agentic system platform.
- Design and implement AI-driven autonomous workflows, enabling seamless human-AI interaction.
- Build and deploy open-source models in cloud environments, optimising inference and serving costs.
- Improve and maintain data pipeline reliability and participate in on-call rotations.
- Debug and fix issues in ML pipelines, even when the cause is obscure.
- Collaborate with cross-functional teams to integrate AI models into production systems.
- Clearly articulate the work you’ve done and the impact you’ve made.
We are early stage, so the work is dynamic and evolving. Examples of additional challenges you might tackle:
- Make things work. Even the hardest things.
- Deploy AI models in scalable and cost-efficient ways.
- Optimise prompts, refine model outputs, and experiment with novel prompting strategies.
- Implement backend endpoints to bridge AI capabilities into our production stack.
- Label data and refine model training workflows.
- Hire and manage part-time annotators to improve data quality.
- Create quick prototypes using Dash/Streamlit to validate concepts.
- Own features end-to-end, from ideation to deployment.
- Be on-call for urgent AI model fixes or system failures.
Qualifications:
- Proficiency in Python and related libraries (e.g., NumPy, SciPy, pandas) is required.
- Strong production experience with at least one framework: LangChain, AutoGen, or CrewAI.
- Deep understanding of agentic systems, autonomous workflows, and LLM-based automation.
- Experience deploying and fine-tuning open-source models (e.g., LLaMA, DeepSeek) in the cloud.
- 5 years of hands-on experience building, productionising, iterating on, and scaling AI-driven pipelines.
- Ability to take projects to completion, unblock yourself, and present results clearly and impactfully.
- A track record of staying on top of recent trends, with hands-on experience fine-tuning LLMs beyond API comparisons.
- Strong knowledge of software engineering, including building scalable web services and APIs.
- Experience developing full-stack applications, including database design, API development, admin panel creation, and monitoring systems.
- Experience with GCP is a big plus.
- DevOps experience is a big plus.
- Prompt engineering expertise and creative problem-solving mindset.
- Experience with processing multimodal data (text, images, audio) is a plus.
Perks and Benefits:
- Fully Remote: Work from anywhere with flexible hours.
- Team Events: Occasional in-person meetups plus monthly remote game sessions.
- Generous Vacation: Take time off when you need it.
- Growth Opportunities: Continuous professional development and learning support.
- Dynamic Culture: Be part of a fast-moving, high-impact team.
- Stock Options: Get equity in our growing startup.
Gen AI Engineer employer: Open Data Science
Contact Detail:
Open Data Science Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Gen AI Engineer role
✨Tip Number 1
Familiarise yourself with the specific tools and frameworks mentioned in the job description, such as LangChain, AutoGen, and CrewAI. Having hands-on experience or even personal projects showcasing your skills with these technologies can set you apart from other candidates.
✨Tip Number 2
Stay updated on the latest trends in AI and LLMs. Engage with online communities, attend webinars, or follow relevant blogs to demonstrate your passion and knowledge during interviews. This will show that you're proactive and committed to continuous learning.
✨Tip Number 3
Prepare to discuss your previous projects in detail, especially those involving AI-driven pipelines and cloud deployments. Be ready to articulate the challenges you faced, how you overcame them, and the impact of your work, as this aligns with the role's responsibilities.
✨Tip Number 4
Network with current employees or others in the industry who have experience with similar roles. They can provide insights into the company culture and expectations, which can help you tailor your approach when applying through our website.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights relevant experience in AI, particularly with frameworks like LangChain, AutoGen, or CrewAI. Emphasise your proficiency in Python and any hands-on experience with LLMs and cloud deployment.
Craft a Compelling Cover Letter: In your cover letter, express your passion for AI and how your skills align with the responsibilities outlined in the job description. Mention specific projects where you've developed or deployed AI models, showcasing your problem-solving abilities.
Showcase Relevant Projects: Include a portfolio or links to projects that demonstrate your experience with AI-driven pipelines, model fine-tuning, and cloud environments. This will provide concrete evidence of your capabilities and achievements.
Prepare for Technical Questions: Anticipate technical questions related to AI workflows, model deployment, and debugging ML pipelines. Be ready to discuss your approach to problem-solving and how you stay updated with the latest trends in AI.
How to prepare for a job interview at Open Data Science
✨Showcase Your Technical Skills
Be prepared to discuss your proficiency in Python and relevant libraries like NumPy and pandas. Highlight any experience you have with frameworks such as LangChain, AutoGen, or CrewAI, and be ready to provide examples of how you've used them in past projects.
✨Demonstrate Problem-Solving Abilities
Expect to face questions that assess your creative problem-solving skills. Prepare to discuss specific challenges you've encountered in AI model deployment or pipeline issues, and explain how you approached and resolved them.
✨Understand the Company’s Mission
Familiarise yourself with Anecdote's mission to revolutionise customer feedback analysis. Be ready to articulate how your skills and experiences align with their goals, and think about how you can contribute to their innovative platform.
✨Prepare for Collaborative Scenarios
Since the role involves working with cross-functional teams, be prepared to discuss your experience in collaboration. Think of examples where you successfully integrated AI models into production systems and how you communicated your work's impact to stakeholders.