Research Scientist, Multimodal Alignment, Safety, and Fairness

London · Full-Time · £100,000 - 140,000 / year (est.) · Remote work not available
The Rundown AI, Inc.

At a Glance

  • Tasks: Join us to design experiments and improve AI safety and alignment.
  • Company: Be part of Google DeepMind, a leader in advancing AI for public benefit.
  • Benefits: Enjoy competitive salary, bonuses, equity, and flexible work options.
  • Why this job: Make a real-world impact on AI fairness and safety while collaborating with diverse experts.
  • Qualifications: PhD in Computer Science and strong publication record in top conferences required.
  • Other info: Diversity and inclusion are core values; we welcome all backgrounds.

The predicted salary is between £100,000 and £140,000 per year.

Snapshot

We are seeking strong Research Scientists with expertise in AI research and experience in interdisciplinary sociotechnical modeling to join a multimodal safety research effort within Google DeepMind's Frontier AI unit. This role requires a passion for understanding and modeling the interactions between AI and society, a strong awareness of the AI alignment and safety landscape, and a penchant for developing novel ideas, methods, interfaces, and tools.

This is a unique opportunity to contribute to impactful research and advance Google DeepMind's mission towards Artificial General Intelligence (AGI).

About us

Artificial Intelligence could be one of humanity's most useful inventions. At Google DeepMind, we're a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence and ultimately achieve Artificial General Intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

We're a dedicated scientific community, committed to "solving intelligence" and ensuring our technology is used for widespread public benefit.

We've built a supportive and inclusive environment where collaboration is encouraged and learning is shared freely. We don't set limits based on what others think is possible or impossible. We drive ourselves and inspire each other to push boundaries and achieve ambitious goals.

Our team is a part of Google DeepMind's Frontier AI unit. We have a mission to advance the frontiers of safety and inclusion in multimodal AI, build new capabilities into Google's flagship models and products, and break new ground on AI alignment. We approach alignment research with an ecosystem view, partnering across the development and deployment cycle and grounding our work in real-world impact on global users and communities. We are a research team with a mandate to invest in longer-term bets and explore innovative approaches that deliver breakthrough improvements to models (Gemini) and products. Our work is at the frontier of augmented oversight for multimodal AI, and our research advances have directly impacted multiple versions of Gemini and Nano Banana models.

The role

We are seeking strong Research Scientists with expertise in AI research and experience in interdisciplinary sociotechnical modeling to lead new breakthrough research directions.

You will join a team working to supercharge exploration, assessment, and steering of evolving AI behaviors, with a focus on subjective and creative tasks. You will tackle the underlying research questions to improve collaborative specification of alignment objectives and assessment of adherence to desired behaviors. You will research new methods to enable AI agents to monitor real-world social context and dynamically evaluate and evolve system behaviors over long time-horizons. You'll develop new paradigms for human+AI rating that consider systemic behaviors, adapt to human feedback, and proactively seek context.

Research Scientists at Google DeepMind lead the development of novel tools and algorithms to advance progress toward Artificial General Intelligence. Joining from top academic or industrial labs, they collaborate across fields to tackle fundamental AI questions using expertise in deep learning, computer vision, and generative architectures. This role requires independent judgment to navigate complex, ambiguous problems and explore diverse technical avenues. Your work will drive breakthroughs within GDM, Google products, and the AI alignment community.

Key responsibilities

Research: Generate new ideas, keep up with the state of the art in the field, and discuss research directions with other researchers.

Execute: Design, rapidly implement, and rigorously evaluate cutting-edge ideas, methods, interfaces, and tools to explore new sociotechnical AI systems.

Communicate: Report and present research findings and developments clearly and efficiently both internally and externally, verbally and in writing.

Collaborate: Suggest and engage in inter- and intra-team collaborations to meet ambitious research goals, while also driving significant individual contributions.

Drive technical projects: Take ownership of substantial technical projects, from ideation and design to implementation and evaluation, often involving cross-functional collaboration.

About you

To set you up for success as a Research Scientist at Google DeepMind, we look for the following skills and experience:

Requirements:

PhD degree in Computer Science, Machine Learning, or a related technical field.

Strong publication record in top machine learning and computer vision conferences (e.g., NeurIPS, CVPR, ICML, ICLR, ICCV, ECCV).

Demonstrated hands-on experience in developing multimodal AI models and systems.

Strong programming skills in Python and experience with at least one major deep learning framework (e.g., JAX/Flax/Gemax).

Experience conducting independent research and development, including experimental design, implementation, and analysis.

In addition, the following would be an advantage:

Proven expertise in working with and tuning large-scale vision language models.

Experience prototyping with VLMs using modern prompting strategies.

Experience finetuning and post-training LLMs using RL.

Experience developing agentic AI solutions to complex problems.

Excitement about collaborating across organisations and disciplines to leverage diverse perspectives and expertise and find creative solutions.

Interest in, and a strong awareness of, the AI alignment / safety / responsibility / fairness landscape.

Experience with Generative AI techniques and architectures.

Familiarity with Reinforcement Learning or alignment methods.

Experience with multimodal learning, integrating information from different data types (e.g., vision, audio, text).

The US base salary range for this full-time position is between 147,000 USD – 211,000 USD + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.

Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunities regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.


Research Scientist, Multimodal Alignment, Safety, and Fairness employer: The Rundown AI, Inc.

At Google DeepMind, we pride ourselves on being an exceptional employer, fostering a collaborative and inclusive work culture that prioritises safety, fairness, and innovation in artificial intelligence. Located in the vibrant tech hub of Mountain View, CA, our team offers unparalleled opportunities for professional growth, access to cutting-edge research, and the chance to make a meaningful impact on global communities. With competitive salaries, comprehensive benefits, and a commitment to diversity, we empower our employees to thrive both personally and professionally.

Contact Detail:

The Rundown AI, Inc. Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Research Scientist, Multimodal Alignment, Safety, and Fairness role

Tip Number 1

Familiarise yourself with the latest research in AI alignment, safety, and fairness. This will not only help you understand the current landscape but also allow you to engage in meaningful conversations during interviews.

Tip Number 2

Network with professionals in the field by attending conferences or workshops related to machine learning and AI. Building connections can provide insights into the company culture and potentially lead to referrals.

Tip Number 3

Showcase your experience with modern deep learning frameworks by working on personal projects or contributing to open-source initiatives. This practical experience can set you apart from other candidates.

Tip Number 4

Prepare to discuss your research methodologies and findings in detail. Being able to articulate your thought process and the impact of your work will demonstrate your expertise and passion for the role.

We think you need these skills to ace the Research Scientist, Multimodal Alignment, Safety, and Fairness role

PhD in Computer Science or related field
Strong publication record in top machine learning conferences
Experience with modern deep learning frameworks (e.g., TensorFlow, JAX, PyTorch)
Prototyping with Vision Language Models (VLMs)
Modern prompting strategies for reasoning
Finetuning and post-training of Large Language Models (LLMs) using Reinforcement Learning (RL)
Development of agentic AI solutions
Collaboration across organisations and disciplines
Creative problem-solving skills
Interest in AI alignment, fairness, safety, and responsibility
Experimental design and execution
Research and development of novel techniques for agent-human interaction
Modeling expected user behaviour and identifying loss patterns

Some tips for your application 🫡

Understand the Role: Thoroughly read the job description for the Research Scientist position. Familiarise yourself with the key responsibilities and required skills, particularly in areas like agentic techniques, model safety, and alignment.

Highlight Relevant Experience: In your CV and cover letter, emphasise your PhD in Computer Science or related fields, along with your publication record in top machine learning conferences. Make sure to mention any experience with deep learning frameworks like TensorFlow, JAX, or PyTorch.

Showcase Your Research Skills: Detail your experience in designing and executing experiments, as well as any novel techniques you've developed for agent-human interaction workflows. This is crucial for demonstrating your fit for the role.

Express Your Passion for AI Ethics: Convey your interest in AI alignment, fairness, safety, and responsibility in your application. Discuss how you can contribute to the mission of advancing fairness and inclusion in multimodal AI.

How to prepare for a job interview at The Rundown AI, Inc.

Showcase Your Research Experience

Be prepared to discuss your previous research projects in detail, especially those related to AI safety and fairness. Highlight your publication record and any innovative techniques you've developed, as this will demonstrate your expertise and fit for the role.

Familiarise Yourself with Current Trends

Stay updated on the latest advancements in multimodal AI and agentic techniques. Being able to discuss recent papers or breakthroughs during your interview will show your passion for the field and your commitment to ongoing learning.

Prepare for Technical Questions

Expect technical questions related to deep learning frameworks like TensorFlow, JAX, or PyTorch. Brush up on your knowledge of these tools and be ready to explain how you've used them in your past work, particularly in relation to model safety and alignment.

Demonstrate Collaborative Spirit

Since the role involves collaboration across various disciplines, be ready to share examples of how you've successfully worked in teams. Emphasise your ability to leverage diverse perspectives to solve complex problems, which aligns with the company's mission.
