OpenAI has announced the establishment of a new team dedicated to child safety. The move comes amid growing concern among activists and parents about the potential for AI tools to be misused or abused by minors.
"OpenAI's Child Safety team helps to ensure that OpenAI's technologies are not misused or abused in ways harmful to underage populations," the ChatGPT maker wrote in a job listing.
"Through close partnerships with our Legal, Platform Policy, and Investigations colleagues, this team manages processes, incidents, and reviews to protect our online ecosystem. The team also handles key external engagements and relationships," it added.
OpenAI's Child Safety Team
The Child Safety team was made public through a job listing on OpenAI's career page. According to the listing, this specialized team collaborates closely with internal groups, including the platform policy, legal, and investigations departments, as well as external partners, to oversee processes, incidents, and reviews related to underage users.
The primary objective of the Child Safety team is to ensure that OpenAI's technologies are not exploited in ways that could be detrimental to underage populations.
The team operates by implementing and scaling review processes for sensitive content and providing expert-level guidance on policy compliance within the context of AI-generated content.
The role of a team member within this newly formed division involves reviewing content that breaches established policies, refining review and response procedures, and addressing escalated issues through investigations and follow-up actions.
Collaboration with engineering, policy, and research teams is also emphasized to enhance tooling, policies, and understanding of abusive content.
More Details About the Job Listing
Ideal candidates are expected to bring a pragmatic approach to operational work, genuine enthusiasm for AI technology, and prior experience in trust and safety or a related field.
They must also be proficient in data analysis; familiarity with scripting languages, particularly Python, is considered an advantage.
Applications will be accepted until February 13, with interviews and onboarding slated to occur by mid-March.
Successful candidates can expect a competitive salary in the range of $136,000 to $220,000 annually, along with benefits such as equity, medical insurance, mental health support, parental leave, and stipends for professional development.