Research Engineer, Frontier Safety Risk Assessment London, UK; New York City, New York, US; San[...]

Full-Time · £100,000 - £150,000 / year (est.) · No home office possible

At a Glance

  • Tasks: Join us to assess and manage risks from cutting-edge AI systems.
  • Company: Be part of Google DeepMind, a leader in AI innovation.
  • Benefits: Enjoy competitive salary, flexible working, and comprehensive health benefits.
  • Why this job: Make a real impact on the future of AI safety and ethics.
  • Qualifications: Strong background in deep learning and Python programming required.
  • Other info: Collaborative environment with opportunities for personal and professional growth.

The predicted salary is between £100,000 and £150,000 per year.

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

Our team identifies, assesses, and mitigates potential catastrophic risks from current and future AI systems. As a member of technical staff, you will design, implement, and empirically validate approaches to assessing and managing catastrophic risk from current and future frontier AI systems. At the moment, these risks range from loss of control of advanced AI systems or automated ML R&D, to misuse of AI for widespread CBRN or cyber harm.

The Risk Assessment team measures and assesses the possible risks posed by frontier systems, making sure that GDM knows the capabilities and propensities of frontier models so that adequate mitigations are in place. We also make sure that the mitigations do enough to manage the risks. But the risks posed by frontier systems are, themselves, unclear. Forecasting the possible risk pathways is challenging, as is designing and implementing sensors that could reliably detect emerging risks before we actually have real-world examples.

We focus on building decision-relevant and trustworthy evaluation systems that prioritise compute and effort on risk measurements with the highest value of information. We then need to be able to assess the extent to which proposed and implemented mitigations actually cover the identified risks, and to measure how successfully they generalise to novel settings. The Risk Assessment team is part of Frontier Safety, which is responsible for measuring and managing severe potential risks from current and next-generation frontier models. Our approach is to adaptively scale risk assessment and mitigation processes to handle the near future.
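
To make the idea of prioritising measurements by value of information concrete, here is a minimal, purely hypothetical Python sketch (not the team's actual tooling; the evaluation names, information-gain numbers, and compute costs below are invented for illustration). It greedily selects candidate evaluations with the highest expected information gain per unit of compute until a budget is exhausted.

    from dataclasses import dataclass

    @dataclass
    class CandidateEval:
        name: str                  # hypothetical evaluation name
        expected_info_gain: float  # assumed reduction in uncertainty about a risk estimate
        compute_cost_hours: float  # assumed accelerator-hours needed to run it

    def prioritise_by_value_of_information(candidates, budget_hours):
        """Greedy toy heuristic: rank by expected information gain per compute-hour,
        then take evaluations in that order while they still fit in the budget."""
        ranked = sorted(candidates,
                        key=lambda c: c.expected_info_gain / c.compute_cost_hours,
                        reverse=True)
        selected, spent = [], 0.0
        for cand in ranked:
            if spent + cand.compute_cost_hours <= budget_hours:
                selected.append(cand)
                spent += cand.compute_cost_hours
        return selected

    # Invented example inputs, for illustration only.
    candidates = [
        CandidateEval("cyber-uplift-benchmark", 0.8, 200),
        CandidateEval("autonomy-long-horizon-tasks", 1.5, 800),
        CandidateEval("mitigation-generalisation-probe", 0.4, 50),
    ]
    print([c.name for c in prioritise_by_value_of_information(candidates, budget_hours=900)])

In practice, the hard part is estimating the information gain itself, which is exactly the kind of under-constrained research question this role involves.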

We are part of GDM’s AGI Safety and Alignment Team, whose other members focus on research aimed at enabling systems further in the future to be aligned and safe. These areas include interpretability, scalable oversight, control, and incentives.

We are seeking 2 Research Engineers for the Frontier Safety Risk Assessment team within the AGI Safety and Alignment Team. In this role, you will contribute novel research towards our ability to measure and assess risk from frontier models. This might include:

  • Identifying new risk pathways within current areas (loss of control, ML R&D, cyber, CBRN, harmful manipulation) or in new ones;
  • Conceiving of, designing, and developing new ways to measure pre-mitigation and post-mitigation risk;
  • Forecasting and scenario planning for future risks which are not yet material.

Your work will involve complex conceptual thinking as well as engineering. You should be comfortable with research that is uncertain, under-constrained, and which does not have an achievable “right answer”. You should also be skilled at engineering, especially using Python, and able to rapidly familiarise yourself with internal and external codebases. Lastly, you should be able to adapt to pragmatic constraints around compute and researcher time that require us to prioritise effort based on the value of information.

Although this job description is written for a Research Engineer, all members of this team are better thought of as members of technical staff. We expect everyone to contribute to the research as well as the engineering and to be strong in both areas. The role will mostly depend on your general ability to assess and manage future risks rather than on specialist knowledge within the risk domains; insofar as specialist knowledge is helpful, knowledge of ML R&D and loss of control as risk domains is likely the most valuable.

In order to set you up for success as a Research Engineer at Google DeepMind, we look for the following skills and experience:

  • You have extensive research experience with deep learning and/or foundation models (for example, but not necessarily, a PhD in machine learning).
  • You are adept at generating ideas and designing experiments, and implementing these in Python with real AI systems.
  • You are keen to address risks from foundation models, and have thought about how to do so.
  • You plan for your research to impact production systems on a timescale between “immediately” and “a few years”.
  • You are excited to work with strong contributors to make progress towards a shared ambitious goal.
  • You have strong, clear communication skills and are confident engaging technical stakeholders, sharing research insights tailored to their background.

In addition, any of the following would be an advantage:

  • Experience in areas such as frontier risk assessment and/or mitigations, safety, and alignment.
  • Engineering experience with LLM training and inference.
  • PhD in Computer Science or Machine Learning related field.
  • A track record of publications at venues such as NeurIPS, ICLR, ICML, RL/DL, EMNLP, AAAI and UAI.
  • Experience collaborating on or leading an applied research project.

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law.

If you have a disability or additional need that requires accommodation, please do not hesitate to let us know. At Google DeepMind, we want employees and their families to live happier and healthier lives, both in and out of work, and our benefits reflect that. Some select benefits we offer: enhanced maternity, paternity, adoption, and shared parental leave, private medical and dental insurance for yourself and any dependents, and flexible working options. We strive to continually improve our working environment, and provide you with excellent facilities such as healthy food, an on‑site gym, faith rooms, terraces etc.

For US-based locations, the salary range is $136,000 - $245,000 + bonus + equity + benefits. At Google DeepMind, we are committed to equal employment opportunity under all protected classes established by law. We do not discriminate on the basis of any protected group status under any applicable law.

Employer: DeepMind Technologies Limited

At Google DeepMind, we pride ourselves on being an exceptional employer, offering a dynamic work culture that fosters innovation and collaboration among top-tier scientists and engineers. Our commitment to employee growth is evident through our extensive benefits package, which includes flexible working options, enhanced parental leave, and comprehensive health insurance, all designed to support a healthy work-life balance. Located in vibrant cities like London and New York City, our teams are at the forefront of AI safety research, making a meaningful impact while enjoying a supportive and inclusive environment.

Contact Details:

DeepMind Technologies Limited Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Research Engineer, Frontier Safety Risk Assessment role

✨Tip Number 1

Network like a pro! Reach out to people in the industry, attend meetups, and connect with current employees at Google DeepMind. A friendly chat can sometimes lead to opportunities that aren’t even advertised.

✨Tip Number 2

Show off your skills! Prepare a portfolio or a GitHub repository showcasing your projects related to AI and risk assessment. This gives you a chance to demonstrate your expertise beyond just a CV.

✨Tip Number 3

Ace the interview! Research common interview questions for research engineers and practice your responses. Be ready to discuss your thought process on risk assessment and how you tackle complex problems.

✨Tip Number 4

Apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in joining the team at Google DeepMind.

We think you need these skills to ace the Research Engineer, Frontier Safety Risk Assessment role

Deep Learning
Foundation Models
Python Programming
Risk Assessment
Scenario Planning
Complex Conceptual Thinking
Research Design
Communication Skills
Machine Learning R&D
Engineering Experience
Collaboration
Adaptability
Publication Track Record

Some tips for your application 🫡

Tailor Your Application: Make sure to customise your CV and cover letter for the Research Engineer role. Highlight your experience with deep learning and risk assessment, and show us how your skills align with our mission at Google DeepMind.

Showcase Your Projects: Include specific examples of projects you've worked on that relate to AI safety or risk assessment. We love seeing how you've tackled complex problems and what impact your work has had in the field.

Be Clear and Concise: When writing your application, keep it straightforward. Use clear language to explain your ideas and experiences. We appreciate well-structured applications that get straight to the point!

Apply Through Our Website: Don’t forget to submit your application through our official website. It’s the best way for us to receive your details and ensures you’re considered for the role. We can’t wait to see what you bring to the table!

How to prepare for a job interview at DeepMind Technologies Limited

✨Know Your Stuff

Make sure you brush up on your deep learning and foundation models knowledge. Be ready to discuss your past research experiences and how they relate to assessing risks in AI systems. Familiarise yourself with the latest trends in machine learning and frontier risk assessment.

✨Show Your Problem-Solving Skills

Prepare to demonstrate your ability to think critically about complex problems. Think of examples where you've identified risks or developed innovative solutions in uncertain situations. This role requires a mix of engineering and conceptual thinking, so be ready to showcase both.

✨Communicate Clearly

Strong communication skills are key! Practice explaining your research insights in a way that’s accessible to technical stakeholders. Tailor your explanations to their background, ensuring they understand the implications of your work on production systems.

✨Be Ready for Collaboration

This role is all about teamwork. Prepare to discuss how you've collaborated on research projects in the past. Highlight your experience in leading or contributing to applied research, and be open to sharing ideas and learning from others in the team.
