At a Glance
- Tasks: Lead cutting-edge AI safety research and mentor a dynamic team of researchers.
- Company: Join Faculty, a leader in responsible AI innovation with a collaborative culture.
- Benefits: Enjoy unlimited annual leave, private healthcare, and flexible working options.
- Why this job: Make a real impact on the future of safe AI systems and shape technology's legacy.
- Qualifications: Proven track record in AI research and deep knowledge of language models.
- Other info: Diverse and inclusive environment with excellent career growth opportunities.
The predicted salary is between £72,000 and £108,000 per year.
We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then we've worked with over 350 global customers to transform their performance through human‑centric AI. We don’t chase hype cycles. We innovate, build and deploy responsible AI that moves the needle—and we know a thing or two about doing it well. Our depth of technical product and delivery expertise serves clients across government, finance, retail, energy, life sciences and defence.
Our business and reputation are growing fast, and we’re always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology. AI is an epoch‑defining technology—join a company where you’ll be empowered to envision its most powerful applications and to make them happen.
About the Team
Faculty conducts critical red‑teaming and builds evaluations for misuse capabilities in sensitive areas such as CBRN, cybersecurity and international security for several leading frontier model developers and national safety institutes. Our work has been featured in OpenAI’s system card for o1. We also conduct fundamental technical research on mitigation strategies, publishing our findings at peer‑reviewed conferences and delivering them to national security institutes. Complementing this, we design evaluations for model developers across broader safety‑relevant fields, including the societal impacts of increasingly capable frontier models, showcasing our expertise across the safety landscape.
About the Role
The Principal Research Scientist for AI Safety will be the driving force behind Faculty’s small, high‑agency research team shaping the future of safe AI systems. You will lead the scientific research agenda for AI safety, focusing on large language models and other safety‑critical systems. The role involves leading researchers, driving external publications and ensuring alignment with Faculty’s commercial ambition to build trustworthy AI, giving you the opportunity to make a high‑impact contribution in a rapidly evolving, critical field.
What you’ll be doing
- Lead the AI safety team’s ambitious research agenda, setting priorities aligned with long‑term company goals.
- Conduct and oversee cutting‑edge AI safety research specifically for large language models and safety‑critical AI systems.
- Publish high‑impact research findings in leading academic conferences and journals.
- Shape the research agenda by identifying impactful opportunities and balancing scientific and practical priorities.
- Help build and mentor a growing team of researchers, fostering an innovative and collaborative culture.
- Collaborate on delivery of evaluations and red‑teaming projects in high‑risk domains like CBRN and cybersecurity.
- Position Faculty as a thought leader in AI safety through research and strategic stakeholder engagement.
Who we’re looking for
- You have a proven track record of high‑impact AI research demonstrated through top‑tier academic publications or equivalent experience.
- You possess deep domain knowledge in language models and the evolving field of AI safety.
- You show strong research judgment and have extensive experience in AI safety, including generating and executing novel research directions.
- You can conduct and oversee complex technical research projects, with advanced programming skills (Python and the standard data‑science stack) sufficient to review the team’s work.
- You bring excellent verbal and written communication skills and can share complex ideas with diverse audiences.
- You have a deep understanding of the AI safety research landscape and the ability to build connections to secure resources for impactful work.
Our Interview Process
- Talent Team Screen (30 mins)
- Experience & Theory interview (45 mins)
- Research presentation and coding interview (75 mins)
- Leadership and Principles interview (60 mins)
- Final stage with our CEO (45 mins)
Our Recruitment Ethos
We aim to grow the best team—not the most similar one. We know that diversity of individuals fosters diversity of thought and strengthens our principle of seeking truth. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.
Some of our standout benefits
- Unlimited annual leave policy
- Private healthcare and dental
- Enhanced parental leave
- Family‑friendly flexibility and flexible working
- Sanctus coaching
- Hybrid working (2 days in our Old Street office, London)
Principal Research Scientist AI Safety in London
Employer: Faculty AI
Contact: Faculty AI Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Principal Research Scientist AI Safety role in London
✨Tip Number 1
Network like a pro! Reach out to folks in the AI safety field on LinkedIn or at conferences. A friendly chat can open doors that a CV just can't.
✨Tip Number 2
Show off your expertise! Prepare a portfolio of your research and projects related to AI safety. When you get the chance, share it during interviews to demonstrate your impact.
✨Tip Number 3
Practice makes perfect! Mock interviews with friends or mentors can help you nail those tricky questions. Plus, it’s a great way to boost your confidence before the real deal.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. We’re excited to see what you bring to the table!
Some tips for your application 🫡
Show Your Passion for AI Safety: When writing your application, let your enthusiasm for AI safety shine through! We want to see how your interests align with our mission to build responsible AI. Share any relevant projects or research that highlight your commitment to this critical field.
Tailor Your CV and Cover Letter: Make sure to customise your CV and cover letter for the Principal Research Scientist role. Highlight your experience with large language models and AI safety, and don’t forget to mention any high-impact publications. We love seeing how your background fits with our goals!
Be Clear and Concise: In your written application, clarity is key! Use straightforward language to explain your research experience and technical skills. We appreciate well-structured applications that make it easy for us to understand your qualifications and potential contributions.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it shows you’re keen on joining our team at Faculty!
How to prepare for a job interview at Faculty AI
✨Know Your AI Safety Stuff
Make sure you brush up on the latest trends and research in AI safety, especially around large language models. Familiarise yourself with recent publications and be ready to discuss how your work aligns with Faculty's mission to build trustworthy AI.
✨Showcase Your Leadership Skills
Since this role involves leading a team, prepare examples of how you've successfully managed research projects or mentored others. Highlight your ability to foster collaboration and innovation within a team setting.
✨Prepare for Technical Questions
Expect to dive deep into technical discussions during the coding interview. Be ready to demonstrate your programming skills in Python and your understanding of data science concepts. Practising coding problems related to AI safety can give you an edge.
✨Communicate Complex Ideas Clearly
You’ll need to convey intricate concepts to diverse audiences. Practise explaining your research in simple terms, and prepare for questions that may challenge your ideas. Clear communication is key to making a strong impression.