At a Glance
- Tasks: Lead AI safety initiatives and shape global standards for responsible AI.
- Company: Join Faculty, a leading applied AI company making a real-world impact.
- Benefits: Enjoy unlimited annual leave, private healthcare, and flexible working options.
- Other info: Diverse and inclusive workplace with a commitment to positive impact.
- Why this job: Be at the forefront of AI safety and influence its future on a global scale.
- Qualifications: Proven experience in AI safety research and leading high-performing teams.
The predicted salary is between £72,000 and £108,000 per year.
We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we've worked with over 350 global customers to transform their performance through human-centric AI. We don't chase hype cycles. We innovate, build and deploy responsible AI which moves the needle - and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product and delivery expertise to our clients, who span government, finance, retail, energy, life sciences and defence.
Our business, and reputation, is growing fast, and we're always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology. AI is an epoch-defining technology; join a company where you'll be empowered to envision its most powerful applications, and to make them happen.
Faculty's Research team conducts critical red teaming and builds evaluations for misuse capabilities in sensitive areas, such as CBRN, cybersecurity and international security, for several leading frontier model developers and national safety institutes; notably, our work has been featured in OpenAI's system card for o1. Our commitment also extends to conducting fundamental technical research on mitigation strategies, with our findings published in peer-reviewed conferences and delivered to national security institutes. Complementing this, we design evaluations for model developers across broader safety-relevant fields, including the societal impacts of increasingly capable frontier models, showcasing our expertise across the safety landscape.
This is a brand new senior leadership role providing technical leadership of Faculty's work on AI safety for the Foundation Labs - and it presents a unique opportunity to shape how AI safety is done globally. Faculty is one of the world's leading applied AI companies, helping many of the organisations that shape our world to adopt AI successfully and safely. We play an important role in the emerging AI safety ecosystem. We already have many of the key Frontier Labs as clients, including OpenAI and Anthropic, for whom we provide third-party red teaming, technical testing and other AI safety services. And we work with the UK government and other international governments on AI safety, including helping set up the AI Security Institute and delivering technical work which catalysed the first global AI Safety Summit at Bletchley Park in 2023.
With the recent announcement of Faculty's acquisition by Accenture, we are investing to take our work on AI safety to global scale, and this role will be key to shaping that. This will include:
- The opportunity to hire and build a world-class AI safety technical team - of a calibre unmatched outside of the Labs themselves
- The opportunity to design and lead an AI safety R&D programme - creating the advances which will enable AI safety at scale to keep pace with model advances
- The opportunity to build our work with the Frontier Labs to scale - helping to test and assure new frontier models ahead of public release
- The opportunity to contribute to and shape the international debate on AI safety, including with governments and other key bodies, working closely with Marc Warner, Faculty's founder & CEO.
This role will suit someone with a deep passion and commitment to AI safety, and represents a unique opportunity to contribute to this agenda globally.
What you'll be doing:
- Owning the technical strategy for AI safety by determining research directions and building technologies that mitigate risks ranging from misalignment to societal harms.
- Leading a high-performing R&D team through intentional hiring, mentorship, and the cultivation of a culture defined by technical excellence and high output.
- Driving academic impact by guiding complex machine learning projects and securing top-tier publications that cement Faculty's reputation in the safety domain.
- Shaping market-leading offerings for frontier labs and security institutes, translating cutting-edge R&D into practical, groundbreaking safety solutions.
- Overseeing technical delivery of AI safety and security projects, ensuring scientific rigour and high-quality outputs across evaluations and red-teaming.
- Representing Faculty externally as a primary technical voice, delivering influential thought leadership and speaking at major global industry events.
- Collaborating cross-functionally with business unit directors and commercial teams to align research investment with strategic growth and client needs.
Who we're looking for:
- You have a proven track record of designing and leading high-performing technical teams, with the ability to manage R&D budgets and mentor senior technical staff.
- You bring deep expertise in AI safety research, specifically regarding alignment, interpretability, and robustness in large language models (LLMs) or safety-critical systems.
- You possess a strong scientific background, evidenced by high-impact machine learning publications and a comprehensive understanding of transformer architectures.
- You are a strategic visionary capable of setting research priorities that align with long-term organisational goals while remaining at the cutting edge of field developments.
- You are a compelling communicator who can synthesise complex technical concepts into narratives that influence both C-suite executives and the broader research community.
- You exhibit strong commercial acumen and stakeholder management skills, allowing you to navigate complex organisations and accelerate the delivery of high-value projects.
We are open to, and strongly encourage, applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.
Interview Process:
- Talent Team Screen (45 mins)
- Principles and Experience interview (60 mins)
- Research Proposal (90 mins)
- Leadership Interview (60 mins)
- Meet with CEO (30 mins)
Our Recruitment Ethos: We aim to grow the best team - not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live. We're united by a deep intellectual curiosity and a desire to use our abilities for measurable positive impact.
Some of our standout benefits:
- Unlimited Annual Leave Policy
- Private healthcare and dental
- Enhanced parental leave
- Family-Friendly Flexibility & Flexible working
- Sanctus Coaching
- Hybrid Working
If you don't feel you meet all the requirements, but are excited by the role and know you bring some key strengths, please don't hesitate to apply - you might be right for this role, or for other roles. We are open to conversations about part-time hours.
Technical Director of AI Safety in London employer: Faculty
Contact Detail:
Faculty Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Technical Director of AI Safety in London
✨ Tip Number 1
Network like a pro! Reach out to people in the AI safety field, especially those connected to Faculty. Attend industry events and engage in discussions online. You never know who might help you land that dream role!
✨ Tip Number 2
Show off your expertise! Prepare to discuss your past projects and how they relate to AI safety. Be ready to share insights on alignment and robustness in LLMs. This will demonstrate your passion and knowledge during interviews.
✨ Tip Number 3
Practice makes perfect! Conduct mock interviews with friends or mentors. Focus on articulating complex technical concepts clearly. This will help you communicate effectively with both technical teams and C-suite executives.
✨ Tip Number 4
Apply through our website! It's the best way to ensure your application gets noticed. Plus, it shows your enthusiasm for joining Faculty and being part of our mission to innovate responsible AI.
We think you need these skills to ace Technical Director of AI Safety in London
Some tips for your application 🫡
Tailor Your Application: Make sure to customise your CV and cover letter to highlight your experience in AI safety and leadership. We want to see how your skills align with our mission at Faculty, so don't hold back on showcasing your relevant achievements!
Show Your Passion: Let your enthusiasm for AI safety shine through! Share your thoughts on the current landscape of AI and how you envision contributing to its safe development. We love candidates who are genuinely excited about making a positive impact.
Be Clear and Concise: When writing your application, keep it straightforward and to the point. Use clear language to explain your experiences and ideas. We appreciate well-structured applications that make it easy for us to see your potential.
Apply Through Our Website: We encourage you to submit your application directly through our website. It's the best way for us to receive your details and ensures you're considered for the role. Plus, it shows you're keen to join our team!
How to prepare for a job interview at Faculty
✨ Know Your AI Safety Stuff
Make sure you brush up on the latest trends and research in AI safety, especially around alignment and robustness in large language models. Being able to discuss recent publications or breakthroughs will show your passion and expertise.
✨ Showcase Your Leadership Skills
Prepare examples of how you've successfully led technical teams in the past. Highlight your experience in mentoring and building high-performing teams, as this role is all about shaping a world-class AI safety team.
✨ Communicate Clearly
Practice explaining complex technical concepts in simple terms. You'll need to convey your ideas to both technical and non-technical stakeholders, so being a compelling communicator is key.
✨ Align with Faculty's Vision
Familiarise yourself with Faculty's mission and recent projects. Be ready to discuss how your vision for AI safety aligns with their goals and how you can contribute to their ongoing success in the field.