Product Policy Manager, Content & Enforcement Policy
About the Team
The Applied AI Team’s purpose is to commercialize OpenAI’s technology in a manner that leads to broadly beneficial Artificial General Intelligence (AGI) - in particular, by gaining practical experience in deploying these technologies safely. Within Applied AI, Trust and Safety’s mission is to define acceptable uses of OpenAI’s technology, establish detection and response systems to ensure that the technology is not used in unsafe ways, and advance the set of use cases that OpenAI can safely permit.
In 2020, we introduced GPT-3 as the first technology on the OpenAI API, allowing developers to integrate its ability to understand and generate natural language into their products. In 2021, we launched Copilot, powered by Codex, in partnership with GitHub - a new product that can translate natural language to code. In April 2022, we introduced DALL-E 2, AI that creates images from text, and in November 2022 we introduced ChatGPT.
About the Role
Providing access to powerful AI models introduces a host of challenging questions when it comes to content policy development and enforcement: How do we define content policies for what we do and don’t allow to be generated? How do we do this in a way that is actionable, objective, and replicable? Should all categories be enforced the same way?
As an early member of the team, you’ll help shape policy creation and development at OpenAI and make an impact by helping ensure that our groundbreaking technologies are truly used to benefit all people. The ideal candidate can identify and develop cohesive, thoughtful policies with a sense of urgency. They can balance internal and external input in making complex decisions, carefully think through trade-offs, and write principled, enforceable policies based on our values. Product Policy has two subteams: one specializes in product advisory and risk assessment, while this role focuses on content policy development and enforcement policy.
This role is based in our San Francisco HQ. We offer relocation assistance to new employees.
In this role, you will:
- Develop specific content policies that capture a wide range of issues:
  - Create content policies from first principles that uphold OpenAI’s values, and develop taxonomies to train content classifiers
  - Prioritize which topics to tackle, in what order, and at what level of depth, in collaboration with our product, security, and policy research teams
  - Develop a broad range of subject-matter expertise while maintaining agility across topics
  - Iterate on policies based on feedback about enforceability and edge cases
  - Partner with internal and external researchers to adapt our taxonomies to the latest research and best practices
- Experiment with and design policy enforcement using our latest advancements in AI research:
  - Design strike systems that assess violating content at scale and hold bad actors accountable in a principled way
  - Experiment creatively with new types of enforcement levers, exploring solutions with research and engineering
  - Align internal stakeholders around frameworks and OpenAI’s overall approach to policy enforcement, assessing trade-offs between adoption and acceleration
You might thrive in this role if you:
- Have studied or have an interest in philosophy, linguistics, and moral reasoning, and/or enjoy classification problems.
- Have experience defining, refining, and enforcing content policies, especially at leading technology companies or AI/ML labs.
- Understand the operational challenges of enforcing product policies, including in the content moderation space, and can incorporate this into policy design.
- Can analyze the benefits and risks of open-ended problem spaces, working both from first-principles and from industry best practices.
- Are familiar with policy and safety/responsibility questions related specifically to AI and ML.
Note that this role involves grappling with questions about sensitive uses of OpenAI’s technology, including erotic, violent, or otherwise disturbing material. At times, it will require engaging directly with such content as necessary to inform our policy approaches.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Compensation, Benefits and Perks
The annual salary range for this role is $135,000 – $180,000. Total compensation also includes generous equity and benefits.
- Medical, dental, and vision insurance for you and your family
- Mental health and wellness support
- 401(k) plan with 4% matching
- Unlimited time off and 18+ company holidays per year
- Paid parental leave (20 weeks) and family-planning support
- Annual learning & development stipend ($1,500 per year)
We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status. Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via [email protected].