AI Emerging Risks Analyst
OpenAI
Location: San Francisco; Washington, DC
Employment Type: Full time
Location Type: Hybrid
Department: Intelligence & Investigations
Compensation: $178.2K – $320K • Offers Equity
The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation includes generous equity, performance-related bonuses for eligible employees, and the following benefits.
Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
401(k) retirement plan with employer match
Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)
Mental health and wellness support
Employer-paid basic life and disability coverage
Annual learning and development stipend to fuel your professional growth
Daily meals in our offices, and meal delivery credits as eligible
Relocation support for eligible employees
Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.
More details about our benefits are available to candidates during the hiring process.
This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.
About the Team
Working in close collaboration with our internal and external partners, the Intelligence and Investigations team seeks to rapidly identify and mitigate abuse and strategic risks to ensure a safe online ecosystem. Our efforts contribute to OpenAI's overarching goal of developing AI that benefits humanity.
The Strategic Intelligence & Analysis (SIA) team provides safety intelligence for OpenAI’s products by monitoring, analyzing, and forecasting real-world abuse, geopolitical risks, and strategic threats. Our work informs safety mitigations, product decisions, and partnerships, ensuring OpenAI’s tools are deployed securely and responsibly across critical sectors.
About the Role
We are looking for an AI Emerging Risks Analyst to help us understand potential harms and misuse of AI in a time of rapid, sustained change. From known threat actors misusing new technologies to novel threats those technologies enable, we scan available signals and use strategic foresight methodologies to enable proactive detection and mitigation.
In this role, you will provide a strategic-level perspective on a range of evolving risk areas, producing actionable risk taxonomies relevant to OpenAI’s platforms, surfaces, and broader business interests. Using mixed quantitative and qualitative methodologies, you will spot early warning signs, pull threads on potentially concerning behavior, and turn weak signals into clear, prioritized risk calls. You will focus on upstream ecosystem scanning, competitive benchmarking, and external narrative and risk sense-making. Your work will inform cross-functional partners in the protection and safety stacks, guiding mitigations that keep users, brands, and communities safe while allowing productive, creative uses of these tools to thrive.
In this role, you will
Map and prioritize emerging risks
Build and continuously refine a clear picture of emerging signals and trends that could affect the AI ecosystem through upstream and external scanning.
Design and maintain harm taxonomies that provide foresight and warning about how AI harms and misuse may manifest over the next 0-24 months and beyond.
Contribute to an evergreen risk register and prioritization framework that surfaces the top issues by severity, prevalence, exposure, and trajectory (see the sketch below).
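To make the shape of such a register concrete, here is a minimal Python sketch of a prioritization framework scored on severity, prevalence, exposure, and trajectory. The entry fields, 1–5 scales, weights, and example risks are illustrative assumptions, not an actual OpenAI framework.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    # All fields and 1-5 scales are hypothetical, for illustration only.
    name: str
    severity: int    # worst-case impact if the risk materializes
    prevalence: int  # how often related signals are observed today
    exposure: int    # how many users or surfaces could be affected
    trajectory: int  # whether signals are accelerating or fading

    def priority(self) -> float:
        # Weighted sum; the weights are arbitrary placeholders.
        return (0.4 * self.severity + 0.25 * self.prevalence
                + 0.2 * self.exposure + 0.15 * self.trajectory)

# Hypothetical register entries.
register = [
    RiskEntry("agentic-scam-automation", severity=4, prevalence=2, exposure=4, trajectory=5),
    RiskEntry("synthetic-media-harassment", severity=3, prevalence=3, exposure=3, trajectory=3),
]

# Surface the top issues, highest priority first.
for entry in sorted(register, key=lambda e: e.priority(), reverse=True):
    print(f"{entry.name}: priority={entry.priority():.2f}")
```

A weighted sum is the simplest possible scoring rule; an evergreen register might also track confidence levels and signal freshness so stale entries decay over time.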
Detect and deep dive into emerging abuse patterns
Create comprehensive approaches to horizon scanning, competitive benchmarking, and external narrative/risk sense-making.
Stay current on abuse trends ranging from state-actor misuse to criminal activity, drawing on the work of internal and cross-functional partners.
Connect individual incidents into system-level stories about actors, incentives, product design weaknesses, and cross-product spillover, spotting these incidents (or even hypothesizing them) before they hit our surfaces whenever possible.
Turn analysis into actionable risk intelligence
Translate findings into clear, ranked risk lists and concrete proposals for mitigations that product, safety, and policy teams can execute on.
Work with Global Affairs and Communications teams to share findings in ways that reinforce OpenAI’s role as a leader in the online safety ecosystem.
Track whether mitigation work is landing: follow key indicators, pressure-test assumptions, and push for course corrections when the data demands it.
Build early warning and measurement capabilities
Help define the core metrics and signals that indicate whether AI environments are safe (e.g., key harm prevalence, severity distributions, escalation rates, brand safety issues); see the sketch after this section.
Work with data science and visualization colleagues to shape monitoring views and dashboards that highlight leading indicators and unusual changes, connecting signals spotted off-platform to whether they are manifesting in user behavior or abuse patterns.
Pioneer new uses of our own technologies to scale detection and transform workflows.
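As an illustration of the kinds of core metrics named above, here is a minimal Python sketch that computes harm prevalence, a severity distribution, and an escalation rate from incident records. The record schema and values are hypothetical assumptions, not a real data model.

```python
from collections import Counter

# Hypothetical incident records; the schema is assumed for illustration.
incidents = [
    {"harmful": True,  "severity": "high",   "escalated": True},
    {"harmful": True,  "severity": "medium", "escalated": False},
    {"harmful": False, "severity": "none",   "escalated": False},
    {"harmful": True,  "severity": "low",    "escalated": False},
]

# Harm prevalence: share of reviewed items that were actually harmful.
prevalence = sum(i["harmful"] for i in incidents) / len(incidents)

# Severity distribution: how confirmed harm skews across severity bands.
severity_dist = Counter(i["severity"] for i in incidents if i["harmful"])

# Escalation rate: share of harmful items that required escalation.
harmful = [i for i in incidents if i["harmful"]]
escalation_rate = sum(i["escalated"] for i in harmful) / len(harmful)

print(f"prevalence={prevalence:.0%}, escalation_rate={escalation_rate:.0%}")
print(dict(severity_dist))
```

Tracked over time rather than as point estimates, shifts in these numbers are exactly the unusual changes a leading-indicator dashboard would surface.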
Provide strategic analysis and future-looking perspectives
Produce concise but comprehensive strategic intelligence estimates that give full context on a given interest area, including confidence levels grounded in observed data, to inform judgments and recommendations.
Run scenario analyses that explore how AI harms might evolve over the next 6–24 months (e.g., how scam networks may use agentic AI; how state actors may seek to misuse new scientific capabilities of frontier models).
Help design and run tabletop exercises for internal and partner audiences that distill manifest and latent risks and identify mitigations.
Benchmark OpenAI’s risk profile and mitigations against external incidents and other platforms, highlighting gaps, strengths, and opportunities.
Shape safety readiness for new products
Contribute to product readiness and launch reviews by laying out expected abuse modes, based on a broad, upstream understanding of the threat landscape.
Turn risk insights into practical guidance for internal teams (product, marketing, partnerships, comms) and, where appropriate, external partners using OpenAI technologies in social and brand contexts.
Develop reusable frameworks, playbooks, FAQs, and briefing materials that make it easier for the broader organization to understand AI risks and respond consistently.
You might thrive in this role if you have
Significant experience (typically 5+ years) in trust and safety, integrity, security, policy analysis, or intelligence work focused on emerging risks, situating them in strategic context and translating them into actionable intelligence.
Demonstrated ability to analyze complex online harms (e.g., harassment, coordinated abuse, scams, influence operations, brand safety issues) and convert all-source analysis into concrete, prioritized recommendations.
Strong analytical skills and comfort working with both qualitative and quantitative inputs, including: (1) Casework, incident reports, OSINT, product context, and policy frameworks. (2) Basic metrics and trends in partnership with data science (e.g., harm prevalence, severity profiles, exposure, escalation rates).
Strong adversarial and product intuition, able to foresee how actors might adapt AI tools for misuse and evaluate how product mechanics, incentives, and UX decisions influence risk.
Experience designing and using risk frameworks and taxonomies (e.g., harm classification schemes, severity/likelihood matrices, prioritization models) to structure ambiguous spaces and support decision-making; a minimal sketch of such a matrix follows this list.
Understanding of how to apply foresight methodologies such as horizon scanning, scenario planning, tabletop exercises, and simulations.
Proven ability to work cross-functionally with product, engineering, data science, operations, legal, and policy teams, including pushing for clarity on tradeoffs and following through on mitigation work.
Excellent written and verbal communication skills, including experience producing concise, executive-ready briefs and explaining sensitive, complex issues in grounded, concrete terms.
Comfort operating in fast-changing, ambiguous environments: you can identify weak signals, form hypotheses, test them quickly, and adjust as the product and threat landscape evolves.
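For candidates less familiar with the severity/likelihood matrices mentioned above, here is a minimal Python sketch of one; the three bands and tier labels are hypothetical assumptions chosen for illustration.

```python
BANDS = ["low", "medium", "high"]

def risk_tier(severity: str, likelihood: str) -> str:
    """Map a (severity, likelihood) pair onto a simple 3x3 risk tier.

    Bands and tier labels are illustrative placeholders.
    """
    score = BANDS.index(severity) + BANDS.index(likelihood)
    return ["monitor", "monitor", "mitigate", "mitigate", "urgent"][score]

assert risk_tier("low", "low") == "monitor"
assert risk_tier("high", "low") == "mitigate"
assert risk_tier("high", "high") == "urgent"
```

The value of such a matrix lies less in the exact cutoffs than in forcing explicit, comparable judgments across an ambiguous space.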
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.