At a Glance
- Tasks: Conduct advanced AI research and develop safety mechanisms for next-gen AI systems.
- Company: One of the UK's largest banks focused on responsible AI development.
- Benefits: Competitive day rate, potential for permanent role, mentorship, and access to advanced tools.
- Other info: Collaborative environment with opportunities for publication and strategic influence.
- Why this job: Join a pioneering team shaping the future of ethical AI with real-world impact.
- Qualifications: Strong research background in AI, familiarity with Python, and enthusiasm for AI safety.
The predicted salary is between €43,200 and €72,000 per year.
Day Rate Contract - Option To Convert To Permanent In The Future. Join one of the UK's largest banks building next-generation AI capabilities with a strong commitment to safe, explainable, and trusted AI. This team is developing cutting-edge guardrail technologies to ensure AI systems behave reliably across text, voice, and other emerging modalities.
This role is ideal for a curious, rigorous thinker (such as a recent Master’s or PhD graduate) with a passion for responsible AI, agentic systems, and the scientific foundations of guardrail effectiveness. You will work at the intersection of research, model development, and deep validation, contributing to safety frameworks that shape the organisation’s AI strategy.
What You’ll Do
- Research & Explore
- Conduct advanced research into AI guardrails, agentic behaviours, and safe model-interaction patterns.
- Explore state-of-the-art methods across LLMs, multimodal models, and emerging agent systems.
- Investigate niche areas of AI safety such as unintended behaviours, boundary testing, and robustness.
- Build & Experiment
- Develop prototype models, safety mechanisms, and evaluation tools.
- Experiment with multimodal inputs, including text, voice, and video.
- Build and refine guardrail mechanisms that operate across these modalities.
- Deep Testing & Validation
- Design and run in-depth validation experiments to confirm guardrail effectiveness.
- Stress-test models for security, misuse, red-teaming scenarios, and failure boundaries.
- Support development of automated testing frameworks for AI controls.
- Contribute to Responsible AI Strategy
- Help validate controls ensuring AI systems meet internal responsible AI standards.
- Collaborate with engineers, safety specialists, and governance teams.
- Produce high-quality research insights to guide product and platform direction.
What We’re Looking For
- Strong research credentials (PhD, MPhil, MSc, or equivalent research experience).
- Familiarity with Python-based research frameworks.
- Strong foundational knowledge in machine learning, foundation models, or multimodal AI.
- Enthusiasm for AI safety, guardrails, and responsible-AI frameworks.
- Experience building or fine-tuning models (open-source or proprietary).
- Ability to design experiments, measure model behaviour, and interpret results.
- Curiosity about AI alignment, agentic behaviour, and interpretability.
- Exposure to LLM or multimodal model evaluation.
Nice to have:
- Experience working with synthetic data, evaluation sets, or adversarial testing.
- Interest in governance, risk, or AI assurance.
Why Join?
This is a rare opportunity to work on advanced AI research within a major organisation deploying AI at enterprise scale. You’ll join a growing research capability, exploring cutting-edge topics while ensuring AI is developed ethically, responsibly, and with world-class guardrails.
You’ll benefit from:
- Access to advanced tools and emerging models.
- Opportunities to publish internal research and influence strategic direction.
- Mentorship from experienced AI and safety specialists.
- A collaborative environment that values experimentation and novel thinking.
Artificial Intelligence Researcher in the City of London (employer: Caspian One)
Join one of the UK's largest banks as an Artificial Intelligence Researcher, where you will be at the forefront of developing next-generation AI capabilities with a strong emphasis on safety and responsibility. The company fosters a collaborative work culture that encourages innovation and experimentation, providing access to advanced tools and mentorship from seasoned professionals in the field. With opportunities for professional growth and the chance to influence strategic direction through internal research publications, this role offers a meaningful and rewarding career path in a dynamic environment.
StudySmarter Expert Advice🤫
We think this is how you could land the Artificial Intelligence Researcher role in the City of London
✨Tip Number 1
Network like a pro! Reach out to people in the AI field, attend meetups, and connect on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your research projects, experiments, and any models you've built. This will give potential employers a taste of what you can bring to the table.
✨Tip Number 3
Prepare for interviews by brushing up on your knowledge of AI safety and guardrails. Be ready to discuss your thoughts on responsible AI and how you can contribute to their strategy.
✨Tip Number 4
Don't forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are proactive about their job search.
Some tips for your application 🫡
Show Your Passion for AI Safety: Make sure to highlight your enthusiasm for responsible AI and safety frameworks in your application. We want to see that you’re not just knowledgeable, but genuinely excited about the impact of AI on society.
Tailor Your CV and Cover Letter: Don’t just send a generic CV! Tailor it to reflect the skills and experiences that align with our job description. We love seeing how your background fits into our mission of developing safe and explainable AI.
Highlight Your Research Experience: Since we’re looking for strong research credentials, make sure to detail any relevant projects or studies you've worked on. We want to know how your experience can contribute to our cutting-edge guardrail technologies.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way to ensure your application gets the attention it deserves. Plus, it shows us you’re serious about joining our team!
How to prepare for a job interview at Caspian One
✨Know Your AI Stuff
Make sure you brush up on the latest trends in AI, especially around guardrails and safety mechanisms. Be ready to discuss your research experience and how it relates to the role. This shows you're not just a candidate, but someone genuinely passionate about responsible AI.
✨Prepare for Technical Questions
Expect some deep dives into your technical skills, particularly with Python and machine learning frameworks. Practise explaining complex concepts in simple terms, as this will demonstrate your understanding and ability to communicate effectively with non-technical team members.
✨Show Your Curiosity
This role is all about exploration and innovation, so be prepared to share examples of how you've approached problems creatively in the past. Discuss any niche areas of AI safety you've investigated, and don't hesitate to ask insightful questions about the company's current projects.
✨Collaborative Mindset
Since you'll be working closely with engineers and safety specialists, highlight your teamwork experiences. Share examples of successful collaborations and how you’ve contributed to a positive team environment. This will show that you’re not just a lone wolf but someone who thrives in a collaborative setting.