At a Glance
- Tasks: Study AI risks and unprompted model behaviour as part of a cutting-edge research team.
- Company: World's leading AI Security Institute with direct government ties.
- Benefits: Competitive salary, generous leave, hybrid work options, and professional development support.
- Why this job: Make a real impact on AI governance and safety at a global level.
- Qualifications: 3+ years in quantitative research, strong Python skills, and knowledge of AI models.
- Other info: Collaborate with top experts and enjoy a dynamic, supportive work environment.
The predicted salary is between £65,000 and £145,000 per year.
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We work with frontier developers and governments globally. Our resources, unique agility and international influence make this the best place to shape both AI development and government action.
Role Summary
Within the Cyber & Autonomous Systems Team (CAST) at AISI, the Propensity project studies unprompted or unintended model behaviour, particularly potentially dangerous behaviour: the propensity of a model to cause harm. Our current project is to study effect sizes of environmental factors on these propensities, e.g. whether models are consistently more willing to take harmful actions when their existence is threatened. Understanding model propensities is the key missing pillar in our overall picture of risk from autonomous AI.
What we are looking for
The Propensity project team currently consists of one research scientist and two research engineers. We are looking to add a second research scientist to help tackle research questions like the one above, through discussion, through written plans and designs, and by writing or reviewing the code that implements those designs.
The ideal candidate will have the following skills:
- A proven ability to identify and operationalise key uncertainties in a research area, and propose and improve on experimental approaches for collecting evidence on these uncertainties.
- Knowledge of and experience in selecting and applying statistical inference methods in order to draw risk-relevant and action-guiding conclusions from experimental evidence.
- Ability to engage critically with existing or proposed research methodology, assessing to what extent such critiques impact the central conclusions of the work.
- Strong enough Python knowledge to get hands-on with developing and iterating on our Inspect tasks.
- A sufficient understanding of transformer architecture and training dynamics to inform interpretations and predictions of their observable behaviour.
We expect these skills will be held by people with:
- 3+ years of experience in a quantitative research discipline involving experimental design and analysis.
- Experience writing Python code to professional quality standards.
- Professional or educational contact with LLMs and transformer theory.
What We Offer
- Impact you couldn't have anywhere else.
- Incredibly talented, mission-driven and supportive colleagues.
- Direct influence on how frontier AI is governed and deployed globally.
- Opportunity to shape the first and best-resourced public-interest research team focused on AI security.
- Resources & access including pre-release access to multiple frontier models and ample compute.
- Hybrid working, flexibility for occasional remote work abroad and stipends for work-from-home equipment.
- At least 25 days' annual leave, 8 public holidays, extra team-wide breaks and 3 days off for volunteering.
- Generous paid parental leave.
- In addition to your salary, we contribute 28.97% of your base salary to your pension.
Selection process
The interview process may vary from candidate to candidate; however, a typical process includes technical proficiency tests, discussions with a cross-section of our team at AISI, and a final conversation with members of the senior leadership team here at AISI.
Research Scientist, Propensity, Cyber and Autonomous Systems Team in London
Employer: Aisafety
Contact Detail:
Aisafety Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Research Scientist, Propensity, Cyber and Autonomous Systems Team role in London
✨Tip Number 1
Network like a pro! Reach out to people in the AI and research community, especially those connected to the Cyber & Autonomous Systems Team. A friendly chat can open doors that applications alone can't.
✨Tip Number 2
Show off your skills! Prepare a portfolio or a GitHub repository showcasing your Python projects and any relevant research. This gives you a chance to demonstrate your expertise beyond just words on a CV.
✨Tip Number 3
Ace the interview prep! Familiarise yourself with the latest trends in AI security and be ready to discuss how your experience aligns with the Propensity project. We want to see your passion and knowledge shine through!
✨Tip Number 4
Apply through our website! It’s the best way to ensure your application gets the attention it deserves. Plus, we love seeing candidates who take the initiative to connect directly with us.
Some tips for your application 🫡
Tailor Your Application: Make sure to customise your CV and cover letter to highlight your relevant experience and skills that match the role. We want to see how your background aligns with our mission at AISI, so don’t hold back on showcasing your expertise!
Showcase Your Research Skills: Since this role is all about understanding AI risks, be sure to include specific examples of your research experience. Talk about any projects where you’ve identified uncertainties or applied statistical methods – we love seeing that kind of detail!
Be Clear and Concise: When writing your application, clarity is key! Use straightforward language and avoid jargon unless it’s necessary. We appreciate a well-structured application that gets straight to the point without fluff.
Apply Through Our Website: Don’t forget to submit your application through our website! It’s the best way for us to receive your details and ensures you’re considered for the role. Plus, it’s super easy to do!
How to prepare for a job interview at Aisafety
✨Know Your Stuff
Make sure you brush up on your knowledge of AI risks, particularly around model behaviour and propensity. Familiarise yourself with the latest research in the field, as well as the specific methodologies used in the Propensity project. This will help you engage in meaningful discussions during the interview.
✨Show Off Your Python Skills
Since strong Python knowledge is a must, be prepared to discuss your coding experience. Bring examples of your work, especially any projects that involved statistical inference or experimental design. If you can, practice writing some code snippets beforehand to demonstrate your proficiency.
✨Prepare for Technical Questions
Expect technical proficiency tests and questions about transformer architecture and training dynamics. Review key concepts and be ready to explain how they relate to the work you'll be doing. This shows that you not only understand the theory but can also apply it practically.
✨Engage with the Team
During the interview, remember that it's not just about answering questions; it's also about showing how you can fit into the team. Be prepared to discuss how you would collaborate with others, share ideas, and contribute to the overall mission of the Cyber & Autonomous Systems Team.