AI Safety Investigator

Full-Time £72,000 – £120,000 / year (est.) Home office (partial)

At a Glance

  • Tasks: Investigate AI safety practices and document incidents at major corporations.
  • Company: Join the Future of Life Institute, a non-profit focused on reducing risks from transformative technologies.
  • Benefits: Enjoy health insurance, 24+ days PTO, paid parental leave, and remote work allowances.
  • Why this job: Make a real impact in AI safety while working with industry leaders and innovative thinkers.
  • Qualifications: Self-directed, detail-oriented, with strong communication skills and a background in journalism or research preferred.
  • Other info: Diverse candidates are encouraged to apply; rolling applications considered after the deadline.

The Future of Life Institute (FLI) is hiring an AI Safety Investigator to bring the spirit of investigative journalism to FLI. The investigator will document safety practices at the industry’s most powerful corporations, explain incidents and best practices to the general public, and help FLI incentivise a race to the top on safety. In this high-impact role, you’ll prepare and build out our semiannual AI Safety Index and conduct investigative deep dives into corporate AI safety practices. The AI Safety Investigator will report to FLI’s Head of EU Policy and Research and work closely with our President, Max Tegmark.

FLI works to reduce global catastrophic risks from transformative technologies and develop optimistic yet realistic visions of the future.

As AI Safety Investigator, you will:

  • Investigate corporate practices
    • Build a network of key current and former employees at the largest corporations to understand current policies and approaches.
    • Conduct desk research and survey major AI corporations.
    • As AI incidents occur, quickly prepare a summary of available public and private information on what went wrong and how such incidents could be prevented in future.
  • Lead the development of FLI’s AI Safety Index.
    • Take ownership of one of FLI’s flagship projects, conducting research against indicators of safety to score and rank AI corporations.
    • Find ways to make the Index more robust, relevant, accessible, and accurate.
  • Communicate insights to the public
    • Compress complex information into concise, structured formats suitable for index metrics.
    • Help create attention-grabbing yet informative data visualisations and written media that effectively communicate AI incidents.
    • Work with internal and external communication partners to find ways to amplify the key findings to larger or new audiences.

Required qualifications:

  • Self-directed, with a desire to work independently with minimal supervision.
  • Willingness to travel to the Bay Area, California on occasion to follow leads and build relationships.
  • Fluency in English, both written and oral.

Skills and qualities required:

  • An instinct for identifying and following leads, and good judgement to understand the appropriate next steps to take.
  • High attention to detail and commitment to verifying the truth.
  • Ability to communicate technical concepts in clear and engaging ways.
  • Capacity to rapidly assess AI safety incidents and develop infographics or media briefings.

Preferred qualifications:

  • A background in journalism, research or conducting investigations.
  • Strong understanding of AI capabilities, technical safety research, and current risk management approaches (e.g. safety frameworks).
  • Existing network within the AI safety or AI development space.

Compensation: $90,000 – $150,000/year. Exact compensation will vary depending on experience and geography.

Additional benefits include: health insurance, 24+ days of PTO per year, paid parental leave, 401k matching in the US, and a work from home allowance for the purchase of office supplies or equipment. Exact benefits vary depending on location.

Application Deadline: Thursday 4th September 2025. Rolling applications may be considered after the application deadline if the position has not been filled.

Start Date: We’d like the chosen candidate to start as soon as possible after accepting an offer.

Application Process: Apply by uploading your resume, alongside a short answer to the following question(s):

  • In 250 words or less, please outline your personal view on current AI safety practices at the major AI corporations. You can focus in on one corporation if you like, or give your view about the space more generally.
  • In 250 words or less, please outline the challenges you anticipate there might be in investigating AI corporations and the way(s) in which you will seek to overcome them.

Please apply via our website. Email applications are not accepted.

FLI aims to be an inclusive organization. We proactively seek job applications from candidates with diverse backgrounds. If you are passionate about FLI’s mission and think you have what it takes to be successful in this role even though you may not check all the boxes, please still apply. We would appreciate the opportunity to consider your application.

Questions may be directed to jobsadmin@futureoflife.org

About the Future of Life Institute

Founded in 2014, FLI is an independent non-profit working to steer transformative technology towards benefitting life and away from extreme large-scale risks. Our work includes grantmaking, educational outreach, and policy engagement.

Our work has been featured in The Washington Post, Politico, Vox, Forbes, The Guardian, the BBC, and Wired.

Some of our achievements include:

  • Pause Giant AI Experiments, an open letter calling for a 6 month pause on the training of AI systems more powerful than GPT-4. The letter has been signed by more than 30,000 people, including Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak, Yuval Noah Harari, and Andrew Yang.
  • The Asilomar AI Principles, one of the earliest and most influential sets of AI governance principles.
  • Slaughterbots, a viral video campaign raising awareness about the dangers of lethal autonomous weapons.
  • The Future of Life Award, which retrospectively awards unsung heroes who made the world a better place. Past winners include individuals who prevented nuclear wars, helped to eradicate smallpox, and solved the ozone crisis.
  • Worldbuild.ai, which imagines flourishing futures with strong AI and works out how to get there.

FLI is a largely virtual organization, with a team of >25 distributed internationally, mostly in Europe and the US. We have four offices: Campbell in California, Brussels in Belgium, London in the UK, and Washington DC. We meet in person as a full team twice a year.

AI Safety Investigator employer: THE FUTURE OF LIFE INSTITUTE (FLI)

The Future of Life Institute (FLI) is an exceptional employer, offering a unique opportunity to contribute to the critical field of AI safety while working alongside leading experts like Max Tegmark. With a strong commitment to employee growth, FLI provides generous benefits including health insurance, extensive PTO, and a supportive work culture that values diverse perspectives and encourages independent research. Located in a collaborative virtual environment with occasional travel to the Bay Area, California, FLI fosters innovation and impactful work that aims to steer transformative technology towards a safer future.

Contact Detail:

THE FUTURE OF LIFE INSTITUTE (FLI) Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the AI Safety Investigator role

✨Tip Number 1

Network with professionals in the AI safety field. Attend industry conferences, webinars, or local meetups to connect with current and former employees of major AI corporations. Building these relationships can provide you with insider knowledge and potential leads for your investigations.

✨Tip Number 2

Stay updated on the latest AI safety incidents and trends. Follow relevant news outlets, blogs, and social media channels to keep your finger on the pulse of the industry. This will not only help you understand the current landscape but also prepare you for discussions during interviews.

✨Tip Number 3

Develop your skills in data visualisation and infographic creation. Since the role involves communicating complex information clearly, having a portfolio of visual content can demonstrate your ability to present findings effectively and engagingly.

✨Tip Number 4

Familiarise yourself with existing AI safety frameworks and risk management approaches. Understanding these concepts will not only enhance your credibility but also enable you to contribute meaningfully to discussions about improving safety practices within corporations.

We think you need these skills to ace the AI Safety Investigator role

Investigative Journalism
Research Skills
Analytical Thinking
Attention to Detail
Technical Communication
Data Visualisation
Networking Skills
Understanding of AI Safety Practices
Risk Management Knowledge
Ability to Summarise Complex Information
Self-Motivation
Judgement and Decision-Making
Adaptability
Public Speaking
Media Briefing Development

Some tips for your application 🫑

Understand the Role: Before applying, make sure you fully understand the responsibilities and qualifications required for the AI Safety Investigator position. Tailor your application to highlight how your skills and experiences align with these requirements.

Craft Your Short Answers: Pay special attention to the short answer questions in the application. Clearly articulate your views on current AI safety practices and the challenges of investigating AI corporations. Use specific examples where possible to demonstrate your understanding and insights.

Highlight Relevant Experience: In your resume, emphasise any relevant experience in journalism, research, or investigations. Showcase your ability to communicate complex information clearly and your understanding of AI safety and risk management approaches.

Proofread Your Application: Before submitting, thoroughly proofread your application materials. Check for grammatical errors, clarity, and coherence. A well-presented application reflects your attention to detail, which is crucial for this role.

How to prepare for a job interview at THE FUTURE OF LIFE INSTITUTE (FLI)

✨Research the Organisation

Before your interview, make sure to thoroughly research the Future of Life Institute. Understand their mission, recent projects, and key figures like Max Tegmark. This will help you align your answers with their values and demonstrate your genuine interest in the role.

✨Prepare Your Insights on AI Safety

Given the focus on AI safety practices, be ready to discuss your views on current trends and challenges in the industry. Prepare a concise summary of your thoughts on a specific corporation's safety practices, as this could come up during the interview.

✨Showcase Your Investigative Skills

Highlight any previous experience in journalism or research that showcases your ability to investigate and communicate complex information. Be prepared to discuss how you would approach investigating corporate AI safety practices and overcoming potential challenges.

✨Demonstrate Communication Skills

As the role requires clear communication of technical concepts, practice explaining complex ideas in simple terms. You might be asked to present your thoughts on an AI incident or safety framework, so clarity and engagement are key.
