At a Glance
- Tasks: Lead external safety testing for groundbreaking AI models at Google DeepMind.
- Company: Join Google DeepMind, a leader in AI innovation focused on public benefit and ethical practices.
- Benefits: Enjoy a collaborative work environment with opportunities for professional growth and impactful projects.
- Why this job: Be part of a mission-driven team shaping the future of AI safety and responsibility.
- Qualifications: Experience in fast-paced environments, strong communication skills, and project management expertise required.
- Other info: This is a 12-month fixed-term contract based in London.
The predicted salary is between £48,000 and £72,000 per year.
As the External Safety Testing Lead in the Responsible Development and Innovation (ReDI) team, you’ll be integral to the delivery and scaling of our external safety testing program on Google DeepMind’s (GDM’s) most groundbreaking models. You will work with teams across GDM, including Product Management, Research, Legal, Engineering, Public Policy, and Frontier Safety and Governance, to lead external safety evaluations, which are a key part of our responsibility and safety best practices and help Google DeepMind progress towards its mission. The role is a 12-month fixed-term contract.
About us
Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
As the External Safety Testing Lead working in ReDI, you’ll be part of a team that partners with external expert groups to conduct safety evaluations across various domains and modalities on our frontier models. In this role, you’ll collaborate with other members of this critical program, upholding our safety and responsibility commitments whilst responding to the evolving needs of the business.
The role
Key responsibilities:
- Lead the design and oversee the implementation of GDM’s external safety testing program, ensuring it meets our safety and responsibility requirements and external commitments.
- Lead GDM’s input into external safety testing requirements from regulators and government bodies.
- Contribute to public policy work to help shape potential future regulatory requirements and government policies related to AI safety.
- Lead implementation of external safety testing requirements from regulators and government bodies, working with multidisciplinary teams across Legal, Business and Corporate Development, and Engineering.
- Oversee efforts to optimise and scale the program to support the growing needs of the business.
- Identify and plan the program’s strategic resource requirements to execute the external safety testing program successfully, and to deliver against its priorities.
- Carry out cross-industry ‘horizon scanning’ to identify and maintain visibility of current and future external testing requirements from regulators, government bodies, and wider industry standards.
- Matrix-manage a cross-functional team, aligning resources with business priorities and leading the escalation of risks and issues to wider stakeholder groups, including the Head of Evaluations and Responsibility leadership.
Testing scope:
- Scope GDM’s external testing program, including the domains of frontier models to be tested.
- Engage with stakeholders across Responsibility, modelling, and SME teams to identify high-priority focus areas to build into testing plans and inform partnership approaches.
Partnerships:
- Own and manage relationships with various external testing partners across the partnership lifecycle.
- Oversee the identification of new partners with the relevant skillsets to undertake external safety testing, working with relevant SMEs to ensure testing is aligned with high-priority focus areas.
Findings:
- Oversee the collation, assessment, and distribution of external safety testing findings, ensuring internal alignment on severity and escalation of high-severity findings.
Stakeholder engagement and communication:
- Build and lead a high-performing and collaborative multidisciplinary team to deliver the program.
- Oversee communication about the program to wider teams across GDM to increase visibility and buy-in.
- Oversee communication to relevant external stakeholders to influence industry standards and policy positions.
- Represent the external safety testing program in relevant internal and external forums.
Budget:
- Own a significant program budget, ensuring work is delivered within budget and working with the program manager on spend forecasting and reconciliation.
About you
In order to set you up for success as an External Safety Testing Lead in the ReDI team, we look for the following skills and experience:
- Ability to shape, lead, and deliver programs in a highly complex, live environment where decisions must be made in a timely fashion.
- Ability to build and lead high-performing teams.
- Previous experience working in a fast-paced environment, such as a start-up, tech company, or consulting organisation.
- Familiarity with safety considerations of generative AI, including (but not limited to) frontier safety (such as chemical and biological risks), content safety, and sociotechnical risks (such as fairness).
- Strong communication skills and demonstrated ability to work in cross-functional teams, foster collaboration, and influence outcomes.
- Strong project management skills, working with the program manager to optimise existing processes and create new ones.
- Significant experience presenting and communicating complex concepts succinctly and clearly to different audiences.
In addition, the following would be an advantage:
- Experience of working with sensitive data and access controls.
- Prior experience working in product development or similar agile settings.
- Subject matter expertise in generative AI safety considerations, including (but not limited to) frontier safety (such as chemical and biological risks), content safety, and sociotechnical risks (such as fairness).
- Experience designing and implementing audits or evaluations of cutting-edge AI systems.
Application deadline: 6pm BST, 25th May 2025. Unfortunately, we cannot process applications made after the deadline.
Employer: Google DeepMind
Contact: Google DeepMind Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the External Safety Testing Lead - 12 Month Fixed Term Contract role
✨Tip Number 1
Familiarise yourself with the latest trends and regulations in AI safety. This will not only help you understand the landscape but also demonstrate your commitment to the role during discussions with our team.
✨Tip Number 2
Network with professionals in the AI safety field. Engaging with experts can provide insights into best practices and may even lead to valuable connections that could support your application.
✨Tip Number 3
Prepare to discuss your experience in leading cross-functional teams. Highlight specific examples where you've successfully managed diverse groups to achieve a common goal, as this is crucial for the role.
✨Tip Number 4
Stay updated on Google DeepMind's projects and initiatives. Understanding our mission and recent developments will allow you to align your skills and experiences with our goals during the interview process.
Some tips for your application 🫡
Understand the Role: Before you start writing your application, make sure you fully understand the responsibilities and requirements of the External Safety Testing Lead position. Tailor your application to highlight how your skills and experiences align with the key responsibilities outlined in the job description.
Craft a Compelling CV: Your CV should clearly showcase your relevant experience, particularly in safety testing, project management, and cross-functional collaboration. Use bullet points to make it easy to read and ensure that you quantify your achievements where possible.
Write a Strong Cover Letter: In your cover letter, express your enthusiasm for the role and the company. Highlight specific experiences that demonstrate your ability to lead complex programs and work with multidisciplinary teams. Make sure to mention your familiarity with generative AI safety considerations, as this is crucial for the role.
Proofread Your Application: Before submitting, take the time to proofread your application thoroughly. Check for any spelling or grammatical errors, and ensure that all information is accurate and up-to-date. A polished application reflects your attention to detail and professionalism.
How to prepare for a job interview at Google DeepMind
✨Understand the Role and Responsibilities
Make sure you have a clear understanding of the External Safety Testing Lead role. Familiarise yourself with the key responsibilities, such as leading safety evaluations and managing cross-functional teams. This will help you articulate how your experience aligns with their needs.
✨Showcase Your Project Management Skills
Prepare to discuss your project management experience, especially in fast-paced environments. Be ready to provide examples of how you've successfully led complex projects, optimised processes, and managed budgets, as these are crucial for this role.
✨Demonstrate Your Communication Abilities
Since the role involves significant stakeholder engagement, practice articulating complex concepts clearly and succinctly. Think of examples where you've influenced outcomes through effective communication, particularly in cross-functional settings.
✨Familiarise Yourself with AI Safety Considerations
Brush up on your knowledge of generative AI safety, including frontier safety and sociotechnical risks. Being able to discuss these topics intelligently will show that you're not only qualified but also genuinely interested in the field.