At a Glance
- Tasks: Join our AI safety team to develop innovative safety strategies for cutting-edge scientific models.
- Company: Lila Sciences, a pioneering platform in scientific superintelligence and autonomous labs.
- Benefits: Competitive salary, inclusive culture, and opportunities for groundbreaking research.
- Why this job: Make a real impact on AI safety and contribute to solving global challenges.
- Qualifications: Bachelor's degree in a technical field and strong programming skills in Python.
- Other info: Dynamic environment with opportunities for career growth and collaboration across teams.
The predicted salary is between £36,000 and £60,000 per year.
We're building a talent-dense, high-agency AI safety team at Lila that will engage all core teams within the organization (science, model training, lab integration, etc.) to prepare for risks from scientific superintelligence. The team's initial focus will be to build and implement a bespoke safety strategy for Lila, tailored to its specific goals and deployment strategies. This will involve developing technical safety strategy, engaging with the broader ecosystem, and producing technical collateral, including risk- and capability-focused evaluations and safeguards.
What You’ll Be Building
- Evaluations to test for scientific risks (both known and, especially, novel) from cutting-edge scientific models integrated with automated physical labs.
- Initial proof‑of‑concept safeguards, such as ML models to detect and block unsafe behaviour from scientific AI models, as well as from physical lab outputs.
- Understanding of a range of model capabilities, across primarily scientific but also non‑scientific domains (e.g. persuasion, deception) to inform Lila's broader safety strategy.
- Broader, high-quality research efforts, as and when needed, for scientific capability evaluation and restriction.
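In their simplest form, the evaluations and safeguards above might look something like the toy harness below: a rule-based screen that flags model outputs matching known-hazard patterns before they reach an automated lab, plus a small loop that scores the screen against labelled test cases. This is a purely illustrative sketch, not Lila's actual tooling; all names and patterns are invented for the example.

```python
import re
from dataclasses import dataclass

# Illustrative hazard patterns -- invented for this sketch, not a real screening list.
HAZARD_PATTERNS = [
    re.compile(r"\bsynthesi[sz]e\b.*\btoxin\b", re.IGNORECASE),
    re.compile(r"\baerosoli[sz]e\b", re.IGNORECASE),
]

@dataclass
class Verdict:
    blocked: bool
    reason: str

def screen_output(model_output: str) -> Verdict:
    """Block a model's proposed protocol if it matches a known-hazard pattern."""
    for pattern in HAZARD_PATTERNS:
        if pattern.search(model_output):
            return Verdict(blocked=True, reason=f"matched {pattern.pattern!r}")
    return Verdict(blocked=False, reason="no hazard pattern matched")

def evaluate(safeguard, cases) -> float:
    """Score a safeguard against labelled (output, should_block) test cases."""
    correct = sum(safeguard(text).blocked == expected for text, expected in cases)
    return correct / len(cases)
```

A production safeguard would use ML classifiers rather than regexes, but the evaluate loop, scoring a blocker against labelled cases, is the basic shape that dedicated evaluation frameworks automate at scale.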
What You’ll Need to Succeed
- Bachelor's degree in a technical field (e.g., computer science, engineering, machine learning, mathematics, physics, statistics), or related experience.
- Strong programming skills in Python, and experience with ML frameworks (including, for instance, Inspect) for large‑scale evaluation and scaffolded testing.
- Experience in building evaluations, or conducting red‑teaming exercises, for CBRN / cyber risks (or for frontier model capabilities more generally, including both unsafe and benign capabilities).
- Experience in designing and/or implementing (directly or through consultation) AI safety frameworks for frontier AI companies.
- Ability to communicate complex technical concepts and concerns to non‑expert audiences effectively.
Bonus Points For
- Master's or PhD in a field relevant to safety evaluations of AI models in scientific domains, or in another technical field.
- Publications in AI safety / evaluations / model behaviour in top ML / AI conferences (NeurIPS, ICML, ICLR, ACL) or model release system cards.
- Experience researching risks from novel science (e.g. biosecurity, computational biology, etc.) or working with narrow scientific tools (e.g. large scale foundation models for science).
About Lila
Lila Sciences is the world’s first scientific superintelligence platform and autonomous lab for life, chemistry, and materials science. We are pioneering a new age of boundless discovery by building the capabilities to apply AI to every aspect of the scientific method. We are introducing scientific superintelligence to solve humankind's greatest challenges, enabling scientists to bring forth solutions in human health, climate, and sustainability at a pace and scale never experienced before. Learn more about this mission at www.lila.ai.
Equal Employment Opportunity
Lila Sciences is committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status.
Employer: Lila Sciences
Contact: Lila Sciences Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Scientist/Sr. Scientist, AI Safety role
✨Tip Number 1
Network like a pro! Reach out to people in the AI safety field, attend relevant meetups or conferences, and don’t be shy about sharing your passion for the role. Building connections can lead to opportunities that aren’t even advertised.
✨Tip Number 2
Show off your skills! Prepare a portfolio or a GitHub repository showcasing your programming projects, especially those related to AI safety. This gives potential employers a tangible sense of what you can bring to the table.
✨Tip Number 3
Practice makes perfect! Get ready for interviews by rehearsing answers to common questions in the AI safety domain. Think about how you’d explain complex concepts to non-experts, as communication is key in this field.
✨Tip Number 4
Apply through our website! We love seeing candidates who are genuinely interested in Lila. Tailor your application to highlight how your experience aligns with our mission and the specific role, and don’t forget to follow up after applying!
Some tips for your application 🫡
Tailor Your Application: Make sure to customise your CV and cover letter to highlight your relevant experience in AI safety and technical skills. We want to see how your background aligns with our mission at Lila!
Showcase Your Skills: Don’t hold back on showcasing your programming prowess, especially in Python and ML frameworks. We’re keen to see examples of your work that demonstrate your ability to tackle scientific risks.
Communicate Clearly: Remember, we value the ability to explain complex concepts simply. Use clear language in your application to show us you can bridge the gap between technical details and broader implications.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures it gets into the right hands quickly!
How to prepare for a job interview at Lila Sciences
✨Know Your Stuff
Make sure you brush up on your technical knowledge, especially around AI safety frameworks and evaluation methods. Be ready to discuss your experience with Python and ML frameworks, as well as any relevant projects you've worked on that relate to scientific risks.
✨Communicate Clearly
Since you'll need to explain complex concepts to non-experts, practice simplifying your technical jargon. Think about how you can break down your past experiences in a way that anyone can understand, highlighting the impact of your work.
✨Showcase Your Research
If you have publications or research related to AI safety or evaluations, be sure to mention them. Prepare to discuss your findings and how they could apply to Lila's mission. This will demonstrate your expertise and commitment to the field.
✨Engage with Their Mission
Familiarise yourself with Lila's goals and the challenges they aim to tackle. During the interview, express your enthusiasm for their mission and how your skills can contribute to their vision of pioneering scientific superintelligence.