At a Glance
- Tasks: Lead AI safety research and develop strategies for biological and physical risks.
- Company: Join Lila Sciences, a pioneering platform in scientific superintelligence.
- Benefits: Competitive salary, equity options, and a chance to shape the future of science.
- Why this job: Make a real impact on global challenges through cutting-edge AI research.
- Qualifications: PhD in biological or physical sciences and experience in scientific computing.
- Other info: Work in a dynamic environment with opportunities for collaboration and growth.
We’re building a talent-dense, high-agency AI safety team at Lila that will engage all core teams within the organization (science, model training, lab integration, etc.) to prepare for risks from scientific superintelligence. The initial focus of this team will be to build and implement a bespoke safety strategy for Lila, tailored to its specific goals and deployment strategies. This will involve developing technical safety strategy, engaging the broader ecosystem, and building technical collateral, including risk‑ and capability‑focused evaluations and safeguards.
What You’ll Be Building
- Set the build and research strategy for Lila’s safety approach to biological / physical risks.
- Design and build capability evaluations to test for scientific risks (both known and, especially, novel) from cutting-edge scientific models integrated with automated physical labs, across the biological / physical sciences.
- Coordinate and lead threat-modelling exercises with internal and external scientific experts, including monitoring for emerging technologies and use cases.
- Develop and curate high‑quality training and test data for evals and safety systems.
- Evaluate risks from Lila’s capabilities, including through interactions with the wider ecosystem of capabilities (e.g., general‑purpose frontier models as well as narrow scientific tools).
- Contribute to broader, high‑quality research efforts, as and when needed, on scientific capability evaluation and restriction.
- Contribute to external communications on Lila’s safety efforts.
What You’ll Need to Succeed
- A PhD in either a biological sciences domain (e.g., molecular biology, virology, computational biology, or related fields) or a physical sciences domain (materials science, physics, chemistry, chemical or nuclear engineering, or related fields), or other related experience.
- Demonstrated ability to set research directions for open problems in dual‑use risks in the biological / physical sciences.
- Experience in scientific computing, across either biological or physical sciences.
- Familiarity with dual‑use research and dissemination concerns, across the relevant safety / regulatory / governance frameworks (e.g., export control frameworks, biological and chemical‑related conventions and controls).
- Ability to communicate complex technical concepts and concerns effectively to non‑expert audiences.
- Demonstrated ability to lead teams of internal and external collaborators in building out Lila’s point of view on biological / physical risks.
- Demonstrated ability to work with cross‑functional stakeholders (science, AI, product, policy) in a complex environment.
Bonus Points For
- Experience in developing or applying ML to biological or physical sciences.
- Experience in building evaluations, or conducting red‑teaming exercises, across scientific risks for frontier models / narrow scientific tools.
Location
This position may be based in any of Lila's offices, including Cambridge (MA), San Francisco (CA), or London (UK).
About Lila
Lila Sciences is the world’s first scientific superintelligence platform and autonomous lab for life, chemistry, and materials science. We are pioneering a new age of boundless discovery by building the capabilities to apply AI to every aspect of the scientific method. We are introducing scientific superintelligence to solve humankind's greatest challenges, enabling scientists to bring forth solutions in human health, climate, and sustainability at a pace and scale never experienced before.
Compensation
For US‑based candidates (Cambridge or San Francisco), we expect the base salary for this role to fall between $268,000 – $384,000 USD per year, along with bonus potential and generous early equity. The final offer will reflect your unique background, expertise, and impact. For UK‑based candidates, compensation will be determined separately and will be aligned with local market benchmarks and internal leveling.
Equal Opportunity
Lila Sciences is committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. Information you provide during your application process will be handled in accordance with our Candidate Privacy Policy.
Senior/Principal Research Scientist, AI Safety, Biological/Physical Sciences (London) | Employer: Lila Sciences
Contact Detail:
Lila Sciences Recruiting Team