At a Glance
- Tasks: Design and deploy innovative AI systems that transform data into actionable insights.
- Company: Join a fast-growing FinTech company revolutionising data analytics for investors.
- Benefits: Competitive salary, flexible working options, and opportunities for professional growth.
- Why this job: Be at the forefront of AI technology and make a real impact in the finance sector.
- Qualifications: 5+ years in software/ML engineering with hands-on experience in AI systems.
- Other info: Dynamic team environment with a focus on innovation and collaboration.
The predicted salary is between £48,000 and £72,000 per year.
Oxford Data Plan is a fast-growing FinTech company providing alternative data and KPI tracking for 200+ listed companies worldwide. We help fundamental investors make better decisions with proprietary data insights. Founded in 2022, we've grown to over 70 people and are backed by leading investors.
We are hiring an AI Systems Engineer to design, build, and deploy end-to-end AI systems across the organisation—ranging from client-facing AI products to internal tools supporting data science, product, engineering, and revenue teams—on top of robust, scalable AWS infrastructure. This is a hands-on role spanning AI system design, agent architectures, LLM engineering, cloud deployment, and AIOps/MLOps for production reliability. You will work on LLM applications, agentic workflows, RAG systems, ML pipelines, analytics automation, and microservices.
Key Responsibilities
- Design, build, and operate end-to-end AI/LLM systems, including chatbots, analytics assistants, automation tools, and decision-support services.
- Develop internal productivity and intelligence tools that accelerate workflows across data science, product, engineering, and revenue teams.
- Build autonomous AI agents and workflow orchestrators using frameworks such as LangChain, CrewAI, ADK, or equivalent systems.
- Design and implement LLM-backed microservices (FastAPI/Flask) for summarisation, intelligence, forecasting, data extraction, and API-driven reasoning.
- Optimise retrieval quality using metadata, hybrid search, chunking strategies, rerankers, and relevance tuning.
- Implement document classification, NER, entity extraction, and knowledge-graph-driven retrieval where appropriate.
- Establish reliability, safety, and governance guardrails across AI systems, including monitoring, error handling, tool-selection controls, and risk mitigation.
- Instrument, monitor, and evaluate AI and RAG systems using logging, metrics, tracing, agent telemetry, quality benchmarks, hallucination testing, and regression tests.
- Deploy and operate AI agents and LLM microservices on AWS (Bedrock, Lambda, ECS/EKS, API Gateway, S3, Secrets Manager, CloudWatch).
- Build and maintain production CI/CD pipelines (GitHub Actions), manage model/version lifecycles, and support retraining and automated evaluation workflows.
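As a rough illustration of the retrieval work described above (chunking strategies and relevance tuning), here is a minimal sketch of overlapping-window chunking in Python. The function name, window size, and overlap are illustrative defaults, not values from the posting; production systems would typically chunk by tokens or semantic boundaries rather than characters.

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping windows for retrieval indexing.

    Overlap keeps context that spans a chunk boundary retrievable from
    either neighbouring chunk.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    chunks = []
    for i in range(0, len(text), step):
        chunks.append(text[i:i + size])
        if i + size >= len(text):  # last window already covers the tail
            break
    return chunks
```

Tuning `size` and `overlap` against retrieval quality benchmarks is one concrete form the "relevance tuning" responsibility can take.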
Requirements
- Strong software engineering background with expertise in Python, including modular design, async programming, and modern development practices.
- Experience designing and building APIs and microservices (FastAPI / Flask) for production systems.
- Hands-on experience building and operating production LLM systems, including agentic workflows and RAG pipelines.
- Experience designing and operating RAG systems, including vector databases and retrieval pipelines.
- Experience designing and running LLM evaluations, including task-level metrics, hallucination testing, regression benchmarks, and golden datasets.
- Hands-on familiarity with one or more LLM observability and evaluation tools, such as OpenTelemetry, LangSmith, Weights & Biases, Arize Phoenix, or equivalent in-house systems.
- Experience deploying and operating AI systems on AWS (Bedrock, EC2, Lambda, ECS/EKS, API Gateway, S3, CloudWatch), with a focus on reliability, security, and cost-aware production usage.
- Familiarity with Docker, Kubernetes, CI/CD, and continuous deployment in production environments.
- Experience with search and retrieval systems such as AWS Kendra, OpenSearch, Weaviate, Qdrant, or Pinecone.
- Ability to build simple internal-facing UIs or tools (React, Streamlit).
- Experience building reusable SDKs, internal AI platforms, or shared developer frameworks.
- You have 5+ years of overall experience in software engineering/ML engineering, with at least 2 years building GenAI systems in production.
- You ship real production systems, not just prototypes.
- You operate at the intersection of AI, engineering, and operations.
- You think in systems: reliability, observability, cost, and scale.
- You work independently, own problems end-to-end, and simplify complexity.
- You prioritise safety, interpretability, and security in every AI system you build.
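The evaluation requirement above (regression benchmarks and golden datasets) can be sketched as a minimal pass/fail check in Python. The function name, exact-match metric, and threshold are illustrative assumptions; real evaluation suites usually add task-level metrics and hallucination tests on top of this.

```python
from typing import Callable


def regression_eval(
    predict: Callable[[str], str],
    golden: list[tuple[str, str]],
    threshold: float = 0.9,
) -> tuple[float, bool]:
    """Score a predictor against a golden dataset of (query, expected) pairs.

    Returns (accuracy, passed), where passed is False if exact-match
    accuracy falls below the regression threshold.
    """
    hits = sum(
        1
        for query, expected in golden
        if predict(query).strip().lower() == expected.strip().lower()
    )
    accuracy = hits / len(golden)
    return accuracy, accuracy >= threshold
```

Wiring a check like this into CI is one way the "regression benchmarks" responsibility translates into day-to-day engineering.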
Please include a link to your GitHub or portfolio, with examples of AI agents, RAG systems, or ML pipelines you have built and deployed.
AI Systems Engineer employer: Oxford Data Plan
Contact: Oxford Data Plan Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the AI Systems Engineer role
✨Tip Number 1
Network like a pro! Reach out to folks in the FinTech and AI space on LinkedIn or at industry events. A friendly chat can sometimes lead to job opportunities that aren't even advertised!
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those related to AI systems and LLMs. Having tangible examples of your work can really set you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on common technical questions and scenarios related to AI systems engineering. Practising with a friend or using mock interview platforms can help you feel more confident.
✨Tip Number 4
Don't forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are proactive about their job search!
Some tips for your application 🫡
Tailor Your Application: Make sure to customise your CV and cover letter for the AI Systems Engineer role. Highlight your experience with LLM systems, AWS, and any relevant projects that showcase your skills. We want to see how you fit into our fast-paced FinTech environment!
Showcase Your Projects: Don’t forget to include links to your GitHub or portfolio! We love seeing real examples of your work, especially any AI agents or ML pipelines you've built. This gives us a taste of your hands-on experience and creativity.
Be Clear and Concise: When writing your application, keep it straightforward. Use clear language and avoid jargon unless it's relevant. We appreciate a well-structured application that gets straight to the point—just like we do in our engineering processes!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way to ensure your application gets to the right people. Plus, it shows us you're keen on joining our team at Oxford Data Plan!
How to prepare for a job interview at Oxford Data Plan
✨Know Your Tech Stack
Make sure you’re well-versed in the technologies mentioned in the job description, especially Python, AWS, and microservices. Brush up on your experience with FastAPI or Flask, as you'll likely be asked to discuss how you've used these in past projects.
✨Showcase Your Projects
Prepare to share specific examples of AI systems or LLM applications you've built. Bring along your GitHub or portfolio link to demonstrate your hands-on experience. Highlight any production systems you've shipped, focusing on the impact they had.
✨Understand the Business Context
Familiarise yourself with Oxford Data Plan's mission and the FinTech landscape. Be ready to discuss how your role as an AI Systems Engineer can contribute to helping investors make better decisions through data insights.
✨Ask Insightful Questions
Prepare thoughtful questions about the company’s current AI projects, team dynamics, and future goals. This shows your genuine interest in the role and helps you assess if the company is the right fit for you.