At a Glance
- Tasks: Join the Safeguard Analysis Team to research and develop AI security interventions.
- Company: AI Safety Institute, a leader in AI safety research.
- Benefits: Competitive salary, pension options, mentorship, and collaboration with top researchers.
- Other info: Opportunities for all levels, from Junior to Principal roles.
- Why this job: Make a real impact on AI safety while working with world-class experts.
- Qualifications: Experience in AI, security, or related fields; strong Python skills.
The predicted salary is between £65,000 and £75,000 per year.
Role Description
The AI Safety Institute research unit is looking for exceptionally motivated and talented people to join its Safeguard Analysis Team. Interventions that secure a system from abuse by bad actors will grow in importance as AI systems become more advanced and integrated into society. The AI Safety Institute's Safeguard Analysis Team researches such interventions, which it refers to as 'safeguards', evaluating the protections used to secure current frontier AI systems and considering what measures could and should be used to secure such systems in the future.

The Safeguard Analysis Team takes a broad view of security threats and interventions. It's keen to hire researchers with expertise in developing and analysing attacks and protections for systems based on large language models, but it is also keen to hire security researchers who have historically worked outside of AI, such as in (non-exhaustively) computer security, information security, web technology policy, and hardware security. Diverse perspectives and research interests are welcomed.

The Team seeks people with skillsets leaning towards either or both of Research Scientist and Research Engineer, recognising that some technical staff may prefer work that spans or alternates between engineering and research responsibilities. The Team's priorities include research-oriented responsibilities, like assessing the threats to frontier systems and developing novel attacks, and engineering-oriented ones, such as building infrastructure for running evaluations.

In this role, you'll receive mentorship and coaching from your manager and the technical leads on your team. You'll also regularly interact with world-class researchers and other excellent staff, including alumni from Anthropic, DeepMind, and OpenAI, and ML professors from Oxford and Cambridge. In addition to Junior roles, Senior, Staff, and Principal RE positions are available for candidates with the required seniority and experience.
Person Specification
You may be a good fit if you have some of the following skills, experience, and attitudes:
- Experience working on machine learning, AI, AI security, computer security, information security, or some other security discipline in industry, in academia, or independently.
- Experience working with a world-class research team comprising both scientists and engineers (e.g. in a top-3 lab).
- Red-teaming experience against any sort of system.
- Strong written and verbal communication skills.
- A comprehensive understanding of large language models (e.g. GPT-4), including both a broad understanding of the literature and hands-on experience with tasks like pre-training or fine-tuning LLMs.
- Extensive Python experience, including understanding the intricacies of the language, good versus bad Pythonic ways of doing things, and much of the wider ecosystem and tooling.
- The ability to work in a self-directed way with high agency, thriving in a constantly changing environment and a steadily growing team, while figuring out the best and most efficient ways to solve a particular problem.
- Your own voice and experience, together with an eagerness to support your colleagues, a willingness to do whatever is necessary for the team's success, and a knack for finding new ways of getting things done.
- A sense of mission, urgency, and responsibility for success, with demonstrated problem-solving abilities and a readiness to acquire any missing knowledge needed to get the job done.
- Experience writing production-quality code.
- Experience improving technical standards across a team through mentoring and feedback.
- Experience designing, shipping, and maintaining complex tech products.
Research Scientist/Research Engineer - Safeguards employer: AI Safety Institute
Contact Detail:
AI Safety Institute Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Research Scientist/Research Engineer - Safeguards role
✨Tip Number 1
Network like a pro! Reach out to people in the AI and security fields on LinkedIn or at conferences. Don't be shy: ask for informational interviews to learn more about their work and to share your passion for safeguards.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects related to AI safety, machine learning, or security. This could be anything from code snippets to research papers. It’s a great way to demonstrate your expertise beyond just a CV.
✨Tip Number 3
Prepare for those interviews! Research common questions related to AI safety and security, and practice articulating your thoughts clearly. Remember, they want to see how you think and solve problems, so be ready to discuss your approach.
✨Tip Number 4
Apply through our website! We’re always on the lookout for talented individuals who are passionate about safeguarding AI systems. Don’t hesitate to submit your application and let us know how you can contribute to our mission!
We think you need these skills to ace the Research Scientist/Research Engineer - Safeguards application
Some tips for your application 🫡
Show Your Passion: When writing your application, let your enthusiasm for AI safety and research shine through. We want to see that you’re genuinely excited about the role and the impact you can make on safeguarding AI systems.
Tailor Your Experience: Make sure to highlight relevant experiences that align with the job description. Whether it’s your work in machine learning or security, we want to see how your background fits into our Safeguard Analysis Team.
Be Clear and Concise: We appreciate strong written communication skills, so keep your application clear and to the point. Avoid jargon unless necessary, and make sure your ideas flow logically to showcase your thought process.
Apply Through Our Website: Don’t forget to submit your application through our website! It’s the best way for us to receive your details and ensures you’re considered for the role you’re interested in.
How to prepare for a job interview at AI Safety Institute
✨Know Your Stuff
Make sure you brush up on your knowledge of large language models and AI security. Familiarise yourself with the latest research and developments in the field, especially around safeguards and security threats. This will not only help you answer technical questions but also show your genuine interest in the role.
✨Showcase Your Experience
Prepare to discuss your past experiences in detail, especially any red-teaming or security research you've done. Be ready to share specific examples of challenges you've faced and how you tackled them. This will demonstrate your problem-solving abilities and your hands-on experience in the field.
✨Communicate Clearly
Strong written and verbal communication skills are key for this role. Practice explaining complex concepts in simple terms, as you may need to communicate your ideas to colleagues from diverse backgrounds. Consider preparing a few concise explanations of your previous projects to illustrate your points during the interview.
✨Be a Team Player
The Safeguard Analysis Team values collaboration, so be prepared to discuss how you've worked effectively in teams before. Highlight your willingness to support colleagues and share knowledge, as well as any mentoring experiences you've had. This will show that you're not just focused on your own success but also on the team's overall performance.