Research Engineer

London · Full-Time · £52,000 – £78,000 / year (est.) · No home office possible

At a Glance

  • Tasks: Join a team to research and evaluate risks of frontier AI systems.
  • Company: Be part of a cutting-edge organization focused on AI safety and risk modeling.
  • Benefits: Enjoy competitive salaries, mentorship, and a strong learning culture with dedicated reading time.
  • Why this job: Work in a collaborative environment tackling critical AI challenges with autonomy and innovation.
  • Qualifications: Coding experience, multi-disciplinary teamwork, and an understanding of large language models are all a plus.
  • Other info: Open to all experience levels; apply even if you don't meet every qualification!

The predicted salary is between £52,000 and £78,000 per year.

Autonomous Systems

We’re focused on loss of control risks from frontier AI systems. To address this, we’re advancing the state of the science in risk modeling, incorporating insights from other safety-critical and adversarial domains, while developing our own novel techniques. Additionally, we’re empirically evaluating these risks – building out one of the world’s largest agentic evaluation suites, as well as pushing forward the science of model evaluations, to better understand the risks and predict their materialisation. Lastly, we are developing novel mitigations that, for example, attempt to prevent models from intentionally underperforming on dangerous capability evaluations.

Role Summary

As a research engineer, you’ll work as part of a multi-disciplinary team including scientists, engineers and domain experts on the risks that we are investigating. Your team is given a great deal of autonomy to pursue research directions & build evaluations that relate to its overarching threat model. This includes coming up with ways of breaking down the space of risks, as well as designing & building ways to evaluate them. All of this is done within an extremely collaborative environment, where everyone does a bit of everything. Some of the areas we focus on include:

  • Self-replication: Researching the potential for AI systems to autonomously replicate themselves across networks and establish persistence.
  • AI R&D: Investigating AI systems’ potential to iteratively improve themselves, potentially leading to an intelligence explosion.
  • Safety sabotage: Evaluating AI systems’ potential to sabotage safety – for example by sabotaging safety research.

You’ll receive coaching from your manager and mentorship from the principal research engineer on our team. We have a very strong learning & development culture to support this, including Friday afternoons devoted to deep reading and various weekly paper reading groups.

Person Specification

You may be a good fit if you have some of the following skills, experience and attitudes. Please note that you don’t need to meet all of these criteria, and if you’re unsure, we encourage you to apply.

  • Writing production-quality code.
  • Designing, shipping, and maintaining complex tech products.
  • Improving technical standards across a team, through mentoring and feedback.
  • Strong written and verbal communication skills.
  • Experience working within a multi-disciplinary team composed of both scientists and engineers.
  • Strong understanding of large language models. This can include a broad understanding of the literature and/or hands-on experience with, for example, pre-training or fine-tuning LLMs.
  • Extensive Python experience, including the wider ecosystem and tooling.

Salary & Benefits

We are hiring individuals at all levels of seniority and experience within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is listed below; each salary comprises a base salary, a technical allowance, and additional benefits as detailed on this page.

  • Level 3 – Total Package £65,000 – £75,000
  • Level 4 – Total Package £85,000 – £95,000
  • Level 5 – Total Package £105,000 – £115,000
  • Level 6 – Total Package £125,000 – £135,000
  • Level 7 – Total Package £145,000

Research Engineer employer: AI Safety Institute

At our company, we pride ourselves on fostering a collaborative and innovative work culture that empowers research engineers to explore cutting-edge AI safety challenges. With a strong emphasis on professional development, including mentorship from experienced engineers and dedicated learning time, we provide an environment where your contributions can lead to meaningful advancements in the field. Located in a vibrant tech hub, we offer competitive salaries and a comprehensive benefits package, making us an exceptional employer for those passionate about shaping the future of autonomous systems.

Contact Detail:

AI Safety Institute Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Research Engineer role

✨Tip Number 1

Familiarize yourself with the latest research in risk modeling and AI safety. Being well-versed in current literature will not only help you understand the challenges we face but also allow you to contribute meaningfully to discussions during interviews.

✨Tip Number 2

Showcase your experience with large language models, especially if you have hands-on experience with pre-training or fine-tuning. This is a key area of focus for us, and demonstrating your expertise can set you apart from other candidates.

✨Tip Number 3

Highlight any collaborative projects you've worked on that involved multi-disciplinary teams. We value teamwork highly, so sharing specific examples of how you contributed to a diverse group can make a strong impression.

✨Tip Number 4

Prepare to discuss your coding skills, particularly in Python. Be ready to talk about complex tech products you've designed or maintained, as this will demonstrate your technical proficiency and problem-solving abilities.

We think you need these skills to ace the Research Engineer role

Production Quality Code Writing
Complex Tech Product Design and Maintenance
Technical Standards Improvement
Mentoring and Feedback Skills
Strong Written and Verbal Communication
Multi-disciplinary Team Collaboration
Understanding of Large Language Models
Hands-on Experience with Pre-training or Fine-tuning LLMs
Extensive Python Experience
Familiarity with Python Ecosystem and Tooling
Risk Modeling Techniques
Empirical Evaluation Methods
Collaborative Research Environment Adaptability
Problem-Solving in Safety-Critical Domains

Some tips for your application 🫡

Understand the Role: Take the time to thoroughly read the job description for the Research Engineer position. Understand the key responsibilities and required skills, especially in areas like risk modeling and AI systems.

Highlight Relevant Experience: In your application, emphasize any experience you have with writing production-quality code, working in multi-disciplinary teams, and your understanding of large language models. Tailor your CV to showcase these skills.

Craft a Strong Cover Letter: Write a compelling cover letter that explains why you're interested in this role and how your background aligns with the company's focus on autonomous systems and risk evaluation. Be sure to convey your enthusiasm for collaborative research.

Proofread Your Application: Before submitting, carefully proofread your application materials. Ensure there are no grammatical errors and that your writing is clear and concise, reflecting your strong communication skills.

How to prepare for a job interview at AI Safety Institute

✨Showcase Your Technical Skills

Be prepared to discuss your experience with writing production-quality code and maintaining complex tech products. Highlight specific projects where you improved technical standards or mentored others.

✨Demonstrate Collaboration Experience

Since the role involves working in a multi-disciplinary team, share examples of how you've successfully collaborated with scientists and engineers in past projects. Emphasize your communication skills and ability to work in a team.

✨Discuss Your Understanding of AI Risks

Familiarize yourself with the concepts of self-replication, safety sabotage, and iterative improvement in AI systems. Be ready to discuss how these risks can be evaluated and mitigated, showcasing your knowledge of the field.

✨Express Your Enthusiasm for Learning

The company values a strong learning culture, so express your eagerness to engage in deep reading and participate in paper reading groups. Share any relevant experiences that demonstrate your commitment to continuous learning and development.
