At a Glance
- Tasks: Lead cutting-edge AI safety research and mentor a dynamic team.
- Company: Join Faculty, a leader in responsible AI innovation since 2014.
- Benefits: Enjoy unlimited leave, private healthcare, and flexible working options.
- Why this job: Shape the future of AI safety and make a real-world impact.
- Qualifications: Proven AI research experience and strong leadership skills required.
- Other info: Diverse and inclusive culture with excellent career growth opportunities.
The predicted salary is between £43,200 and £72,000 per year.
Why Faculty? We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we've worked with over 350 global customers to transform their performance through human-centric AI. We don't chase hype cycles. We innovate, build and deploy responsible AI which moves the needle - and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product and delivery expertise to our clients, who span government, finance, retail, energy, life sciences and defence. Our business, and reputation, is growing fast and we're always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology.
AI is an epoch-defining technology. Join a company where you'll be empowered to envision its most powerful applications, and to make them happen.
About the team: Faculty conducts critical red teaming and builds evaluations for misuse capabilities in sensitive areas, such as CBRN, cybersecurity and international security, for several leading frontier model developers and national safety institutes; notably, our work has been featured in OpenAI's system card for o1. Our commitment also extends to conducting fundamental technical research on mitigation strategies, with our findings published in peer-reviewed conferences and delivered to national security institutes. Complementing this, we design evaluations for model developers across broader safety-relevant fields, including the societal impacts of increasingly capable frontier models, showcasing our expertise across the safety landscape.
About the role: The Head of Research and Development for AI Safety will head up Faculty's small, high-agency research team, shaping the future of safe AI systems. You will lead the scientific research agenda for AI safety, focusing on large language models and other critical systems. This role involves leading researchers, driving external publications, and ensuring alignment with Faculty's commercial ambition to build trustworthy AI, giving you the opportunity to make a high-impact contribution in a rapidly evolving, critical field.
What you'll be doing:
- Leading the AI safety team's ambitious research agenda, setting priorities aligned with long-term company goals.
- Conducting and overseeing cutting-edge AI safety research, specifically for large language models and safety-critical AI systems.
- Publishing high-impact research findings in leading academic conferences and journals.
- Shaping the research agenda by identifying impactful opportunities and balancing scientific and practical priorities.
- Building, managing, and mentoring a growing team of researchers, fostering an innovative and collaborative culture.
- Collaborating on delivery of evaluations and red-teaming projects in high-risk domains like CBRN and cybersecurity.
- Positioning Faculty as a thought leader in AI safety through research and strategic stakeholder engagement.
Who we're looking for:
- You have a proven track record of high-impact AI research, demonstrated through top-tier academic publications or equivalent experience.
- You possess deep domain knowledge in language models and the evolving field of AI safety.
- You exhibit strong research judgment and extensive experience in AI safety, including generating and executing novel research directions.
- You have the ability to conduct and oversee complex technical research projects, with advanced programming skills (Python, standard data science stack) to review team work.
- You are a passionate leader who adopts a caring attitude towards the personal and professional development of technical teams.
- You bring excellent verbal and written communication skills, capable of sharing complex ideas with diverse audiences.
- You have a deep understanding of the AI safety research landscape and the ability to build connections to secure resources for impactful work.
Our Interview Process: Talent Team Screen (30 mins), Experience & Theory interview (45 mins), Research presentation and coding interview (75 mins), Leadership and Principles interview (60 mins), Final stage with our CEO (45 mins).
Our Recruitment Ethos: We aim to grow the best team - not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live. We're united by a deep intellectual curiosity and desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.
Some of our standout benefits: Unlimited Annual Leave Policy, Private healthcare and dental, Enhanced parental leave, Family-Friendly Flexibility & Flexible working, Sanctus Coaching, Hybrid Working (2 days in our Old Street office, London).
If you don't feel you meet all the requirements, but are excited by the role and know you bring some key strengths, please do apply or reach out to our Talent Acquisition team for a confidential chat - talent@faculty.ai. Please know we are open to conversations about part-time roles or condensed hours.
Head of Research and Development - AI Safety in London employer: The Rundown AI, Inc.
Contact Detail:
The Rundown AI, Inc. Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Head of Research and Development - AI Safety in London
✨Tip Number 1
Network like a pro! Reach out to people in the AI safety field, attend relevant events, and connect with Faculty employees on LinkedIn. Building relationships can open doors that applications alone can't.
✨Tip Number 2
Prepare for your interviews by diving deep into Faculty's work and values. Understand their approach to responsible AI and be ready to discuss how your experience aligns with their mission. Show them you're not just another candidate!
✨Tip Number 3
Don't underestimate the power of a strong personal brand. Share your insights on AI safety through social media or blogs. This not only showcases your expertise but also positions you as a thought leader in the field.
✨Tip Number 4
Apply directly through our website! It's the best way to ensure your application gets seen. Plus, it shows you're genuinely interested in being part of the Faculty team. Don't miss out on this opportunity!
We think you need these skills to ace Head of Research and Development - AI Safety in London
Some tips for your application 🫡
Show Your Passion for AI Safety: When writing your application, let your enthusiasm for AI safety shine through! Share specific examples of your past work or research that align with our mission at Faculty. We want to see your genuine interest in making a positive impact in this critical field.
Tailor Your CV and Cover Letter: Make sure to customise your CV and cover letter for the Head of Research and Development role. Highlight your experience with large language models and any relevant publications. We love seeing how your background fits with our goals, so don't hold back!
Be Clear and Concise: While we appreciate detail, clarity is key! Use straightforward language and structure your application well. This will help us quickly grasp your qualifications and understand your thought process, which is super important for a role like this.
Apply Through Our Website: We encourage you to apply directly through our website. It's the best way to ensure your application gets into the right hands. Plus, it shows us you're serious about joining our team at Faculty!
How to prepare for a job interview at The Rundown AI, Inc.
✨Know Your AI Safety Stuff
Make sure you brush up on the latest trends and research in AI safety, especially around large language models. Familiarise yourself with recent publications and be ready to discuss how they relate to Faculty's work.
✨Showcase Your Leadership Skills
As a Head of Research, you'll need to demonstrate your ability to lead and mentor a team. Prepare examples of how you've successfully managed teams in the past, focusing on fostering innovation and collaboration.
✨Prepare for Technical Questions
Expect to dive deep into technical discussions during the coding interview. Be ready to showcase your programming skills, particularly in Python, and discuss complex research projects you've overseen.
✨Communicate Clearly and Confidently
You'll need to convey complex ideas to diverse audiences. Practice explaining your research and its implications in simple terms, as well as preparing for questions about your communication style and approach to stakeholder engagement.