At a Glance
- Tasks: Design and implement safety strategies for AI in biological and physical sciences.
- Company: Join Lila Sciences, a pioneering platform for scientific superintelligence.
- Benefits: Competitive salary, equity options, and a dynamic work environment.
- Other info: Work in a collaborative team with excellent growth opportunities across global offices.
- Why this job: Make a real impact on AI safety and contribute to groundbreaking scientific advancements.
- Qualifications: PhD in biological or physical sciences and experience in scientific computing.
Overview
Your Impact at Lila
We're building a talent-dense, high-agency AI safety team at Lila that will engage all core teams within the organization (science, model training, lab integration, etc.) to prepare for risks from scientific superintelligence. The team's initial focus will be to build and implement a bespoke safety strategy for Lila, tailored to its specific goals and deployment strategies. This will involve technical safety strategy development, broader ecosystem engagement, and the development of technical collateral, including risk- and capability-focused evaluations and safeguards.
What You'll Be Building
- Design and build capability evaluations to test for scientific risks (both known and, especially, novel) from cutting-edge scientific models integrated with automated physical labs, across the biological and physical sciences.
- Coordinate and lead threat modeling exercises with internal and external scientific experts, including monitoring for emerging technologies and use cases.
- Develop and curate high-quality training and test data for evals and safety systems.
- Evaluate risks from Lila's capabilities, including through interactions with the wider ecosystem of capabilities (e.g. general-purpose frontier models as well as narrow scientific tools).
- Contribute to broader, high-quality research efforts, as needed, on scientific capability evaluation and restriction.
- Contribute to external communications on Lila's safety efforts.
What You'll Need To Succeed
- A PhD in either a biological sciences domain (e.g., molecular biology, virology, computational biology, or related fields) or a physical sciences domain (materials science, physics, chemistry, chemical or nuclear engineering, or related fields), or equivalent experience.
- Experience in scientific computing, across either biological or physical sciences.
- Familiarity with dual-use research and dissemination concerns, and with the relevant safety, regulatory, and governance frameworks (e.g., export control frameworks, biological- and chemical-related conventions and controls).
- Ability to communicate complex technical concepts and concerns to non-expert audiences effectively.
- Demonstrated ability to lead teams of internal and external collaborators in building out Lila's point of view on biological and physical risks.
- Demonstrated ability to work with cross-functional stakeholders (science, AI, product, policy) in a complex environment.
Bonus Points For
- Experience in developing or applying ML to biological or physical sciences.
- Experience in building evaluations, or conducting red-teaming exercises, for scientific risks from frontier models or narrow scientific tools.
Location
This position may be based in any of Lila's offices, including Cambridge (MA), San Francisco (CA), or London (UK).
About Lila
Lila Sciences is the world's first scientific superintelligence platform and autonomous lab for life, chemistry, and materials science. We are pioneering a new age of boundless discovery by building the capabilities to apply AI to every aspect of the scientific method. We are introducing scientific superintelligence to solve humankind's greatest challenges, enabling scientists to bring forth solutions in human health, climate, and sustainability at a pace and scale never experienced before.
If this sounds like an environment you'd love to work in, even if you only have some of the experience listed above, we encourage you to apply.
Compensation
For US-based candidates (Cambridge or San Francisco), we expect the base salary for this role to fall between $176,000 and $304,000 per year, along with bonus potential and generous early equity. The final offer will reflect your unique background, expertise, and impact.
For UK-based candidates, compensation will be determined separately and will be aligned with local market benchmarks and internal leveling.
We're All In
Lila Sciences is committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. Information you provide during your application process will be handled in accordance with our Candidate Privacy Policy.
Research Scientist I/II, AI Safety, Biological/Physical Sciences (London)
Contact: Lila Sciences Recruiting Team