Research Engineering Manager, Responsibility & Safety Evaluations, UK

London · Full-Time · £43,200 – £72,000 / year (est.) · No home office possible

At a Glance

  • Tasks: Lead a team to develop safety evaluations for AI models, ensuring responsible releases.
  • Company: Join Google DeepMind, a leader in AI innovation focused on public benefit and ethical practices.
  • Benefits: Enjoy a diverse work environment, opportunities for growth, and collaboration with top experts.
  • Why this job: Make a real impact on AI safety while working in a dynamic and supportive culture.
  • Qualifications: Bachelor's degree in a technical field and experience leading engineering teams required.
  • Other info: Background check required upon successful application.

The predicted salary is between £43,200 and £72,000 per year.

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunities regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

This role is for an engineering manager working on responsibility and safety assurance evaluations at Google DeepMind. These are the evaluations which allow decision-makers to ensure that our model releases are safe and responsible. The role involves leading an engineering team in developing and maintaining these evaluations and the infrastructure that supports them.

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

In this role, you will be responsible for building and leading a high-performing team of engineers with diverse expertise and skill sets to deliver responsibility and safety assurance evaluations while maintaining healthy and positive team dynamics. You will provide technical and strategic guidance to the team, encouraging innovation and ensuring successful project delivery. Partnering with related engineering, product and program management teams will be a critical part of the role, both to deliver solutions in a timely manner and to develop a longer-term roadmap.

Key responsibilities
  • Lead and grow a research engineering team designing, building and executing safety evaluations of AI models, across risk areas including child safety, hate speech, harassment, representational harms, misinformation, and chemical, biological, radiological and nuclear risks.
  • Work in close partnership with the Responsible Development & Innovation (ReDI) team on prioritisation, roadmap and strategy to ensure evaluations effectively meet needs of decision-makers.
  • Execute and deliver on the roadmap, including overseeing the design and development of evaluations to test the safety of cutting-edge AI models.
  • Develop and maintain infrastructure for these evaluations.
  • Manage the running of evaluations prior to releases of new AI models and, where appropriate, their automation.
  • Clearly communicate progress and outcomes of evaluations work.
  • Collaborate with engineering groups across Google DeepMind and experts in various fields of AI ethics, policy and safety, and develop new cross-team collaborations for project delivery.

In order to set you up for success as a Research Engineering Manager at Google DeepMind, we look for the following skills and experience:

  • Bachelor's degree or greater in a technical subject (e.g. machine learning, AI, computer science, mathematics, physics, statistics), or equivalent experience.
  • Experience leading and developing engineering teams delivering work under tight deadlines and high levels of change and uncertainty.
  • Experience in working with machine learning or high performance computing at scale.
  • Strong knowledge and experience of Python.
  • Experience with deployment in production environments.
  • Experience working with researchers and engineers in research domains.
  • Knowledge of mathematics, statistics and machine learning concepts useful for understanding research papers in the field.
  • Ability to present technical concepts and statistical results clearly to a range of audiences.
  • A deep interest in the responsibility and safety of AI systems, and in AI policy.

In addition, the following would be an advantage:

  • Experience designing and building evaluations for AI models.
  • Expertise in the ethics and safety of AI systems.
  • Experience with crowd computing (e.g. designing experiments, working with human raters).
  • Experience with data analysis tools & libraries.
  • Experience with web application development and user experience design.

Research Engineering Manager, Responsibility & Safety Evaluations, UK employer: Google DeepMind

At Google DeepMind, we pride ourselves on fostering a diverse and inclusive work environment that champions innovation and collaboration. As a Research Engineering Manager in London, you will not only lead a talented team dedicated to ensuring the safety and responsibility of AI technologies but also benefit from our commitment to employee growth through continuous learning opportunities and a supportive culture. With access to cutting-edge resources and a focus on impactful projects, you'll find a rewarding career path that aligns with your passion for ethical AI development.

Contact Detail:

Google DeepMind Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Research Engineering Manager, Responsibility & Safety Evaluations, UK role

✨Tip Number 1

Familiarise yourself with the latest trends and challenges in AI safety and ethics. This will not only help you understand the role better but also allow you to engage in meaningful conversations during interviews, showcasing your passion for the field.

✨Tip Number 2

Network with professionals already working in AI safety and responsibility evaluations. Attend relevant conferences or webinars, and connect with them on platforms like LinkedIn. This can provide you with insider knowledge and potentially lead to referrals.

✨Tip Number 3

Prepare to discuss your leadership style and experiences in managing engineering teams. Be ready to share specific examples of how you've successfully navigated tight deadlines and uncertainty, as this is crucial for the role.

✨Tip Number 4

Showcase your technical skills by being prepared to discuss your experience with Python and machine learning concepts. Consider brushing up on relevant projects you've worked on, as practical examples can significantly strengthen your candidacy.

We think you need these skills to ace the Research Engineering Manager, Responsibility & Safety Evaluations, UK role

Leadership Skills
Team Management
Technical Guidance
Project Delivery
Collaboration Skills
Machine Learning Expertise
High Performance Computing
Python Programming
Production Deployment
Statistical Analysis
AI Ethics Knowledge
Communication Skills
Data Analysis Tools
Web Application Development
User Experience Design

Some tips for your application 🫡

Tailor Your CV: Make sure your CV highlights relevant experience in leading engineering teams, particularly in the context of AI and safety evaluations. Use specific examples that demonstrate your technical skills and leadership abilities.

Craft a Compelling Cover Letter: In your cover letter, express your passion for AI safety and responsibility. Discuss how your background aligns with the role's requirements and mention any specific projects or experiences that showcase your expertise in this area.

Showcase Technical Skills: Clearly outline your technical skills, especially in Python and machine learning. Provide examples of how you've applied these skills in previous roles, particularly in high-pressure environments or with tight deadlines.

Highlight Collaboration Experience: Emphasise your experience working collaboratively with diverse teams. Mention any cross-team projects you've been involved in, especially those related to AI ethics or safety, to demonstrate your ability to partner effectively with others.

How to prepare for a job interview at Google DeepMind

✨Showcase Your Leadership Skills

As a Research Engineering Manager, you'll be leading a team. Be prepared to discuss your previous experiences in managing teams, how you foster collaboration, and how you handle conflicts. Use specific examples to illustrate your leadership style and the positive outcomes of your management.

✨Demonstrate Technical Expertise

Make sure to highlight your technical background, especially in machine learning and Python. Be ready to discuss your experience with high-performance computing and any relevant projects you've worked on. This will show that you have the necessary skills to guide your team effectively.

✨Understand AI Ethics and Safety

Given the focus on responsibility and safety evaluations, it's crucial to demonstrate your understanding of AI ethics. Prepare to discuss current challenges in AI safety and how you would approach them. This shows your commitment to the values of Google DeepMind and your readiness to contribute meaningfully.

✨Prepare for Collaboration Questions

Collaboration is key in this role. Be ready to talk about how you've successfully partnered with other teams in the past. Think of examples where you worked cross-functionally to achieve project goals, as this will highlight your ability to work well within the broader organisation.

Application deadline: 2027-05-26