At a Glance
- Tasks: Join a multi-disciplinary team to investigate risks from autonomous AI systems and develop evaluation techniques.
- Company: Be part of a cutting-edge research unit focused on AI safety and risk modeling.
- Benefits: Enjoy competitive salaries, pension options, and a strong learning culture with mentorship from top researchers.
- Why this job: Work in a collaborative environment tackling groundbreaking research with world-class experts and innovative projects.
- Qualifications: Ideal candidates have experience in deep learning, large language models, and a strong academic background.
- Other info: Flexible roles available across various seniority levels; salary ranges from £65,000 to £145,000.
Autonomous Systems
We’re focused on extreme risks from autonomous AI systems – those capable of interacting with the real world. To address this, we’re advancing the state of the science in risk modeling, incorporating insights from other safety-critical and adversarial domains, while developing our own novel techniques. We’re also empirically evaluating these risks – building out one of the world’s largest agentic evaluation suites, as well as pushing forward the science of model evaluations, to better understand the risks and predict their materialisation.
Role Summary
As a research scientist, you’ll work as part of a multi-disciplinary team including scientists, engineers and domain experts on the risks that we are investigating. Your team is given a huge amount of autonomy to chase research directions and build evaluations that relate to its over-arching threat model. This includes coming up with ways of breaking down the space of risks, as well as designing and building ways to evaluate them. All of this is done within an extremely collaborative environment, where everyone does a bit of everything. Some of the areas we focus on include:
- Research and Development (R&D). Investigating AI systems’ potential to conduct research, particularly in sensitive areas. This includes studying AI capabilities in developing dual-use technologies, unconventional weapons, and accelerating AI and hardware (GPU) development.
- Self-replication. Researching the potential for AI systems to autonomously replicate themselves across networks and studying their ability to establish persistence.
- Human influence. Assessing AI models’ capacity to manipulate, persuade, or coerce individuals and groups. This covers techniques for general human influence, key individual manipulation, social fabric alteration, and the accumulation of social and political power.
- Dangerous resource acquisition. Examining AI models’ ability to navigate restricted or illegal domains for acquiring resources or services. This encompasses research into general acquisition of dual-use resources, circumvention of embargoes, and acquisition of human assets.
- Deceptive alignment. Evaluating AI systems’ potential to display deceptive behaviours. This includes research into AI’s ability to misrepresent its capabilities, conceal its true objectives, and strategically behave in ways that may not align with its actual goals or knowledge.
You’ll receive coaching from your manager and mentorship from the research directors at AISI (including Geoffrey Irving and Yarin Gal). You will also regularly interact with world-famous researchers and other incredible staff (including alumni from DeepMind and OpenAI, and ML professors from Oxford and Cambridge). We have a very strong learning & development culture to support this, including Friday afternoons devoted to deep reading and various weekly paper reading groups.
Person Specification
You may be a good fit if you have some of the following skills, experience and attitudes:
- Experience working within a research team that has delivered multiple exceptional scientific breakthroughs in deep learning (or a related field). We’re looking for evidence of an exceptional ability to drive progress.
- Comprehensive understanding of large language models (e.g. GPT-4). This includes both a broad understanding of the literature and hands-on experience with things like pre-training or fine-tuning LLMs.
- Strong track-record of academic excellence (e.g. multiple spotlight papers at top-tier conferences).
- Improving scientific standards and rigour through things like mentorship and feedback.
- Strong written and verbal communication skills.
- Experience working with world-class multi-disciplinary teams, including both scientists and engineers (e.g. in a top-3 lab).
- Acting as a bar raiser for interviews.
Salary & Benefits
We are hiring individuals across all levels of seniority and experience within the research unit, and this advert allows you to apply for any of the roles within this range. We will discuss and calibrate with you as part of the process. The full range of salaries available is as follows:
- L3: £65,000 – £75,000
- L4: £85,000 – £95,000
- L5: £105,000 – £115,000
- L6: £125,000 – £135,000
- L7: £145,000
There is a range of pension options available, which can be found on the Civil Service website.
Selection Process
In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
Required Experience
We select based on skills and experience in the following areas:
- Research problem selection
- Research science
- Writing code efficiently
- Python
- Frontier model architecture knowledge
- Frontier model training knowledge
- Model evaluations knowledge
- AI safety research knowledge
- Written communication
- Verbal communication
- Teamwork
- Interpersonal skills
- Tackling challenging problems
- Learning through coaching
Desired Experience
We may additionally factor in experience with any of the areas in which our work-streams specialise:
- Autonomous systems
- Cyber security
- Chemistry or Biology
- Safeguards
- Safety Cases
- Societal Impacts
Employer: AI Safety Institute
Contact: AI Safety Institute Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Research Scientist role
✨Tip Number 1
Familiarize yourself with the latest research in AI safety and risk modeling. Being well-versed in current literature will not only help you understand the challenges we face but also allow you to contribute meaningfully during discussions with our multi-disciplinary team.
✨Tip Number 2
Engage with the AI research community by attending conferences or participating in online forums. Networking with professionals from top labs like DeepMind or OpenAI can provide insights into cutting-edge developments and may even lead to valuable connections.
✨Tip Number 3
Showcase your hands-on experience with large language models, particularly in pre-training or fine-tuning. Be prepared to discuss specific projects or breakthroughs you've achieved, as this will demonstrate your capability to drive progress in our research initiatives.
✨Tip Number 4
Highlight your collaborative experiences in multi-disciplinary teams. We value teamwork highly, so sharing examples of how you've successfully worked with scientists and engineers will illustrate your fit for our collaborative environment.
We think you need these skills to ace your Research Scientist application
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights relevant experience in research, particularly in deep learning and AI safety. Include specific projects or breakthroughs you've contributed to, emphasizing your role in multi-disciplinary teams.
Craft a Strong Cover Letter: In your cover letter, express your passion for researching autonomous systems and the risks associated with them. Mention any specific techniques or methodologies you are familiar with that align with the company's focus areas.
Showcase Your Communication Skills: Since strong written and verbal communication skills are essential, ensure your application materials are clear, concise, and free of jargon. Consider including examples of how you've effectively communicated complex ideas in past roles.
Highlight Collaborative Experience: Emphasize your experience working in collaborative environments. Provide examples of how you've successfully worked with scientists and engineers to tackle challenging research problems, showcasing your teamwork and interpersonal skills.
How to prepare for a job interview at AI Safety Institute
✨Showcase Your Research Experience
Be prepared to discuss your previous research projects in detail, especially those that led to significant breakthroughs. Highlight your role in these projects and how you contributed to the team's success.
✨Demonstrate Technical Proficiency
Make sure to brush up on your knowledge of large language models and frontier model architectures. Be ready to discuss your hands-on experience with pre-training or fine-tuning LLMs, as well as any coding skills, particularly in Python.
✨Emphasize Collaboration Skills
Since the role involves working in a multi-disciplinary team, be ready to share examples of how you've successfully collaborated with scientists and engineers in the past. Highlight your interpersonal skills and ability to tackle challenging problems together.
✨Prepare for Behavioral Questions
Expect questions that assess your ability to learn through coaching and mentorship. Think of specific instances where you received feedback and how you applied it to improve your work or the work of others.