Red Teaming Lead, Responsibility

City of London · Full-time · £120,000 – £180,000 / year (est.) · No home office possible

At a Glance

  • Tasks: Lead red teaming efforts to identify risks in advanced AI models and ensure safety.
  • Company: Join Google DeepMind, a leader in AI innovation and responsibility.
  • Benefits: Competitive salary, bonuses, equity, and comprehensive benefits package.
  • Why this job: Make a real impact on AI safety while working with cutting-edge technology.
  • Qualifications: Experience in red teaming and strong understanding of AI risks required.
  • Other info: Dynamic team environment with opportunities for professional growth and collaboration.

Snapshot

This role works with sensitive content or situations and may be exposed to graphic, controversial, and/or upsetting topics or content.

As Red Teaming Lead in Responsibility at Google DeepMind, you will work with a diverse team to drive and grow red teaming of Google DeepMind's most groundbreaking models. You will be responsible for our frontier risk red teaming program, which probes for and identifies emerging model risks and vulnerabilities. You will pioneer the latest red teaming methods with teams across Google DeepMind and external partners to ensure that our work is conducted in line with responsibility and safety best practices, helping Google DeepMind progress towards its mission.

About us

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The role

As a Red Teaming Lead working in Responsibility, you'll be responsible for managing and growing our frontier risk red teaming program. You will conduct hands-on red teaming of advanced AI models, partner with external organizations on red teaming exercises, and work closely with product and engineering teams to develop the next generation of red teaming tooling. You'll support the team across the full range of development, from running early tests to developing higher-level frameworks and reports that identify and mitigate risks.

Key responsibilities

  • Leading and managing the end-to-end responsibility & safety red teaming programme for Google DeepMind.
  • Designing and implementing expert red teaming of advanced AI models to identify risks, vulnerabilities, and failure modes across emerging risk areas such as CBRNe, cyber, and socioaffective behaviors.
  • Partnering with external red teamers and specialist groups to design and execute novel red teaming exercises.
  • Collaborating closely with product and engineering teams to design and develop innovative red teaming tooling and infrastructure.
  • Converting high-level risk questions into detailed testing plans, implementing those plans, and influencing others to support as necessary.
  • Working alongside a team of multidisciplinary specialists to deliver priority projects and incorporate diverse considerations into them.
  • Communicating findings and recommendations to wider stakeholders across Google DeepMind and beyond.
  • Providing an expert perspective on AI risks, testing methodologies, and vulnerability analysis in diverse projects and contexts.

About you

To set you up for success in this role, we are looking for the following skills and experience:

  • Demonstrated experience running or managing red teaming or novel testing programs, particularly for AI systems.
  • A strong, comprehensive understanding of sociotechnical AI risks, from recognized systemic risks to emergent risk areas.
  • A solid technical understanding of how modern AI models, particularly large language models, are built and operate.
  • Strong program management skills with a track record of successfully delivering complex, cross-functional projects.
  • Demonstrated ability to work within cross-functional teams, fostering collaboration and influencing outcomes.
  • Ability to present complex technical findings to both technical and non-technical teams, including senior stakeholders.
  • Ability to thrive in a fast-paced environment and to pivot to support emerging needs.
  • Demonstrated ability to identify and clearly communicate challenges and limitations in testing approaches and analyses.

In addition, the following would be an advantage:

  • Direct, hands-on experience in safety evaluations and developing mitigations for advanced AI systems.
  • Experience with a range of experimentation and evaluation techniques, such as human study research, AI or product red teaming, and content rating processes.
  • Experience working with product development or in similar agile settings.
  • Familiarity with sociotechnical and safety considerations of generative AI, including systemic risk domains identified in the EU AI Act (chemical, biological, radiological, and nuclear; cyber offense; loss of control; harmful manipulation).

The US base salary range for this full-time position is between $174,000 – $258,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.

Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunities regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

Red Teaming Lead, Responsibility employer: The Rundown AI, Inc.

At Google DeepMind, we pride ourselves on fostering a collaborative and innovative work culture that empowers our employees to tackle some of the most pressing challenges in artificial intelligence. As a Red Teaming Lead, you will not only have the opportunity to work with cutting-edge technology but also benefit from a diverse team environment that prioritises safety and ethics, alongside ample opportunities for professional growth and development. Our commitment to employee well-being is reflected in our competitive compensation packages, inclusive policies, and a strong focus on work-life balance, making us an exceptional employer in the tech industry.

Contact Detail:

The Rundown AI, Inc. Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Red Teaming Lead, Responsibility role

✨Tip Number 1

Network like a pro! Reach out to folks in the AI and red teaming space on LinkedIn or at industry events. A friendly chat can open doors that a CV just can't.

✨Tip Number 2

Show off your skills! If you’ve got a portfolio of past projects or case studies, bring them along to interviews. It’s a great way to demonstrate your hands-on experience and problem-solving abilities.

✨Tip Number 3

Prepare for the unexpected! In a fast-paced environment like Google DeepMind, be ready to tackle curveball questions. Think about how you’d approach real-world challenges in red teaming and be ready to discuss your thought process.

✨Tip Number 4

Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in joining our team.

We think you need these skills to ace Red Teaming Lead, Responsibility

Red Teaming
AI Systems Testing
Sociotechnical AI Risk Assessment
Program Management
Cross-Functional Collaboration
Technical Communication
Risk Identification
Vulnerability Analysis
Hands-on Experience in Safety Evaluations
Experimentation Techniques
Agile Methodologies
Understanding of Large Language Models
Development of Red Teaming Tooling
Stakeholder Engagement

Some tips for your application 🫡

Tailor Your Application: Make sure to customise your CV and cover letter for the Red Teaming Lead role. Highlight your experience with red teaming and AI systems, and show us how your skills align with our mission at Google DeepMind.

Showcase Your Technical Know-How: We want to see your understanding of modern AI models and sociotechnical risks. Use specific examples from your past work to demonstrate your expertise and how you’ve tackled similar challenges.

Be Clear and Concise: When writing your application, keep it straightforward. We appreciate clarity, so avoid jargon and make sure your key points stand out. This helps us quickly grasp your qualifications and fit for the role.

Apply Through Our Website: Don’t forget to submit your application through our official website! It’s the best way for us to receive your details and ensures you’re considered for the position. We can’t wait to hear from you!

How to prepare for a job interview at The Rundown AI, Inc.

✨Know Your Red Teaming Stuff

Make sure you brush up on the latest red teaming methods and tools. Familiarise yourself with the specific risks associated with AI models, especially in areas like CBRNe and cyber threats. Being able to discuss these topics confidently will show that you're not just knowledgeable but also passionate about the field.

✨Showcase Your Collaboration Skills

This role involves working closely with diverse teams, so be ready to share examples of how you've successfully collaborated in the past. Highlight your experience in cross-functional projects and how you influenced outcomes. This will demonstrate that you can thrive in a team-oriented environment.

✨Communicate Clearly

You’ll need to present complex findings to both technical and non-technical audiences. Practice explaining your previous work in simple terms, focusing on the impact and importance of your findings. This skill is crucial for ensuring everyone understands the risks and recommendations.

✨Be Ready to Pivot

The fast-paced nature of this role means you should be prepared to adapt quickly to new challenges. Think of examples from your past where you had to change direction or approach due to emerging needs. Showing that you can handle change will set you apart as a candidate who can thrive in dynamic environments.
