At a Glance
- Tasks: Conduct advanced AI research and develop safety mechanisms for next-gen AI systems.
- Company: Join one of the UK's largest banks focused on responsible AI.
- Benefits: Competitive day rate, potential for permanent role, access to advanced tools, and mentorship.
- Why this job: Make a real impact in AI safety while working with cutting-edge technologies.
- Qualifications: Strong research credentials and familiarity with Python-based frameworks required.
- Other info: Collaborative environment with opportunities to publish research and influence strategy.
The predicted salary is between £60,000 and £84,000 per year.
Day Rate Contract - Option To Convert To Permanent In The Future.
Join one of the UK's largest banks building next‑generation AI capabilities with a strong commitment to safe, explainable, and trusted AI. This team is developing cutting‑edge guardrail technologies to ensure AI systems behave reliably across text, voice, and emerging multimodal modalities.
This role is ideal for a curious, high‑calibre thinker (such as a recent Master’s or PhD graduate) with a passion for responsible AI, agentic systems, and the scientific foundations behind guardrail effectiveness. You will work at the intersection of research, model development, and deep validation, contributing to safety frameworks that shape the organisation’s AI strategy.
What You’ll Do
- Research & Explore
- Conduct advanced research into AI guardrails, agentic behaviours, and safe model‑interaction patterns.
- Explore state‑of‑the‑art methods across LLMs, multimodal models, and emerging agent systems.
- Investigate niche areas of AI safety such as unintended behaviours, boundary testing, and robustness.
- Build & Experiment
- Develop prototype models, safety mechanisms, and evaluation tools.
- Build and refine guardrail mechanisms that operate across multiple modalities.
- Experiment with multimodal inputs, including text, voice, and video.
- Deep Testing & Validation
- Design and run high‑depth validation experiments to confirm guardrail effectiveness.
- Stress‑test models for security, misuse, red‑teaming scenarios, and failure boundaries.
- Support development of automated testing frameworks for AI controls.
- Contribute to Responsible AI Strategy
- Help validate controls ensuring AI systems meet internal responsible AI standards.
- Collaborate with engineers, safety specialists, and governance teams.
- Produce high‑quality research insights to guide product and platform direction.
What We’re Looking For
- Strong research credentials (PhD, MPhil, MSc, or equivalent research experience).
- Familiarity with Python‑based research frameworks.
- Strong foundational knowledge in machine learning, foundation models, or multimodal AI.
- Enthusiasm for AI safety, guardrails, and responsible‑AI frameworks.
- Experience building or fine‑tuning models (open‑source or proprietary).
- Ability to design experiments, measure model behaviour, and interpret results.
- Curiosity about AI alignment, agentic behaviour, and interpretability.
- Exposure to LLM or multimodal model evaluation.
Nice to have:
- Experience working with synthetic data, evaluation sets, or adversarial testing.
- Interest in governance, risk, or AI assurance.
Why Join?
This is a rare opportunity to work on advanced AI research within a major organisation deploying AI at enterprise scale. You’ll join a growing research capability, exploring cutting‑edge topics while ensuring AI is developed ethically, responsibly, and with world‑class guardrails.
You’ll benefit from:
- Access to advanced tools and emerging models.
- Opportunities to publish internal research and influence strategic direction.
- Mentorship from experienced AI and safety specialists.
- A collaborative environment that values experimentation and novel thinking.
Senior Machine Learning Researcher in London. Employer: Caspian One
Contact Detail: Caspian One Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Machine Learning Researcher role in London
✨Tip Number 1
Network like a pro! Reach out to professionals in the AI and machine learning space on LinkedIn or at industry events. Don’t be shy; ask for informational interviews to learn more about their work and share your passion for responsible AI.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects related to AI guardrails and multimodal models. This will not only demonstrate your expertise but also give you something tangible to discuss during interviews.
✨Tip Number 3
Prepare for technical interviews by brushing up on your Python skills and understanding the latest research in AI safety. Practice explaining complex concepts in simple terms, as this will help you connect with interviewers who may not have a deep technical background.
✨Tip Number 4
Apply through our website! We’re always on the lookout for curious minds like yours. Tailor your application to highlight your enthusiasm for AI safety and your research credentials, and don’t forget to follow up after applying to express your interest!
Some tips for your application 🫡
Show Your Passion for AI Safety: When writing your application, make sure to highlight your enthusiasm for responsible AI and safety frameworks. We want to see that you’re genuinely interested in the ethical implications of AI and how it can be developed safely.
Tailor Your Experience: Don’t just list your qualifications; connect them to the role! Mention specific projects or research that align with guardrail technologies and AI safety. This helps us see how your background fits into our mission.
Be Clear and Concise: While we love detail, clarity is key! Make sure your application is easy to read and gets straight to the point. Use bullet points where necessary to break down complex information, especially when discussing your research experience.
Apply Through Our Website: We encourage you to submit your application through our website. It’s the best way for us to receive your details and ensures you’re considered for this exciting opportunity. Plus, it’s super easy!
How to prepare for a job interview at Caspian One
✨Know Your AI Stuff
Make sure you brush up on the latest trends in AI, especially around guardrails and safety mechanisms. Be ready to discuss your research experience and how it relates to the role, as well as any specific projects you've worked on that align with their focus on responsible AI.
✨Show Off Your Experimentation Skills
Prepare to talk about your experience designing experiments and testing models. Bring examples of how you've measured model behaviour and interpreted results, especially in relation to multimodal inputs like text and voice. This will show them you're not just a thinker but also a doer.
✨Be Curious and Engaged
Demonstrate your curiosity about AI alignment and agentic behaviours during the interview. Ask insightful questions about their current projects and future directions. This shows you're genuinely interested in contributing to their mission and can think critically about the challenges they face.
✨Collaborative Mindset
Since this role involves working with engineers and safety specialists, highlight your teamwork skills. Share experiences where you've collaborated on research or projects, and how you’ve contributed to a positive team dynamic. They’ll want to see that you can work well in a collaborative environment.