At a Glance
- Tasks: Review and enforce child safety policies on OpenAI products, ensuring a safe environment for users.
- Company: Join OpenAI, a leading AI research company dedicated to making technology safe for everyone.
- Benefits: Enjoy a hybrid work model, relocation assistance, and a supportive team culture.
- Why this job: Make a real difference in child safety while working with cutting-edge technology.
- Qualifications: Experience in Trust & Safety, strong analytical skills, and understanding of legal reporting requirements.
- Other info: Dynamic role with opportunities for growth and collaboration across various teams.
The predicted salary is between £36,000 and £60,000 per year.
About the Team
The Child Safety team is responsible for detection, review, and enforcement of OpenAI’s Child Safety product policies. We leverage a balanced use of technology and subject matter expertise to scale our operations. We collaborate with internal legal, research, and policy teams, external experts, and other industry stakeholders to keep OpenAI aligned with evolving regulations and industry best practices around child safety.
About the Role
A Child Safety Enforcement Specialist is an investigations and enforcement decision maker who owns the execution of platform child safety issues. This person will perform critical content reviews in their area; possibly work with vendors, training them on policies and providing deep insights; maintain quality processes for workflows and automated content moderation; actively expand their knowledge of vulnerabilities and mitigation techniques; and work cross-functionally to improve policies, tooling, and processes.
You’ll be responsible for
This role will directly support our child safety-related policies and processes. At times this may include running deep analysis on child safety-related content and user behavior on OpenAI products. Candidates should understand that this role requires significant content review.
Location and work model
This role is based in our London, United Kingdom office. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
In this role, you will
- Conduct detailed reviews of user-generated content for violations of child safety policies, including CSAM and grooming behaviors.
- Investigate high-risk user behavior escalations and determine appropriate enforcement actions in line with policy and legal standards.
- Identify content requiring mandatory reporting and draft high-quality reports for the National Center for Missing & Exploited Children (NCMEC).
- Use internal tools to triage, review, and escalate content efficiently; provide feedback to improve workflows and performance in our detection technologies.
- Partner with legal, policy, product, and engineering teams to provide frontline insights that inform safety product improvements and enforcement policies.
You might thrive in this role if you
- Have Trust & Safety experience in the child safety space, preferably at a large tech company with a considerable volume of child safety-related content and content review
- Have worked in an operations capacity and understand typical Trust & Safety topics and concepts
- Clearly understand the legal obligations and reporting flows involved in submitting CyberTips to the National Center for Missing & Exploited Children (NCMEC)
- Have demonstrated an ability to operate in an ambiguous and rapidly changing environment
- Are not expected to do any engineering in this role, though software, statistics, or ML experience and SQL skills are great additions
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic.
For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Child Safety Enforcement Specialist employer: The Rundown AI, Inc.
Contact Detail:
The Rundown AI, Inc. Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Child Safety Enforcement Specialist role
✨Tip Number 1
Network like a pro! Reach out to folks in the child safety and tech space on LinkedIn or at industry events. A friendly chat can open doors that a CV just can't.
✨Tip Number 2
Prepare for interviews by diving deep into OpenAI’s child safety policies. Show us you know your stuff and can discuss real-world scenarios. We love candidates who can think critically about our mission!
✨Tip Number 3
Practice makes perfect! Do mock interviews with friends or use online platforms. The more comfortable you are talking about your experience, the better you'll shine when it counts.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who take that extra step!
Some tips for your application 🫡
Tailor Your Application: Make sure to customise your CV and cover letter to highlight your experience in child safety and content review. We want to see how your skills align with the role, so don’t hold back on showcasing relevant examples!
Show Your Passion: Let us know why you’re excited about working in child safety! Share any personal experiences or motivations that drive your interest in this field. We love seeing genuine enthusiasm for what we do.
Be Clear and Concise: When writing your application, keep it straightforward. Use clear language and avoid jargon unless it’s relevant. We appreciate a well-structured application that gets straight to the point!
Apply Through Our Website: Don’t forget to submit your application through our official website! It’s the best way to ensure your application gets seen by the right people. Plus, it makes the process smoother for everyone involved.
How to prepare for a job interview at The Rundown AI, Inc.
✨Know Your Policies Inside Out
Make sure you’re well-versed in OpenAI’s child safety policies and relevant legal obligations. Familiarise yourself with terms like CSAM and grooming behaviours, as these will likely come up during your interview.
✨Showcase Your Analytical Skills
Prepare to discuss your experience with content review and analysis. Be ready to provide examples of how you've handled high-risk user behaviour or similar situations in the past, highlighting your decision-making process.
✨Demonstrate Cross-Functional Collaboration
This role involves working with various teams, so be prepared to talk about your experience collaborating with legal, policy, or engineering teams. Share specific instances where your insights led to improvements in processes or policies.
✨Stay Updated on Industry Trends
Child safety is an ever-evolving field. Research current trends and challenges in child safety enforcement, and be ready to discuss how you would apply this knowledge to improve OpenAI's practices.