
Research, Post-Training Data

Cognition

San Francisco, CA, USA
Posted on Apr 9, 2026

Location

San Francisco Bay Area

Employment Type

Full time

Location Type

On-site

Department

Research & Development

Who We Are

We are an applied AI lab building end-to-end software agents. We're the team behind Devin, the first AI software engineer, and Windsurf, an AI-native IDE. These products represent our vision for AI that doesn't just assist engineers, but works alongside them as a genuine teammate.

Our team is small and talent-dense: world-class competitive programmers, former founders, and researchers from the frontier of AI, with alumni of Scale AI, Palantir, Cursor, Google DeepMind, and others.

Role Mission

Post-training data research sits at the core of our roadmap and at the intersection of human insight and machine learning. This is the critical bridge between raw model intelligence and a system that is actually useful, safe, and collaborative for humans.

Our work combines human and synthetic data techniques to capture the nuances of human behavior and use them to steer models. We research the mechanisms that create value for people, so we can explain, predict, and optimize for human preferences, behaviors, and satisfaction. We also explore new paradigms for human-AI interaction and scalable oversight. This role blends fundamental research and practical engineering; we don't distinguish between the two.

What You'll Accomplish

  • Data Strategy: Design and execute data collection and synthesis strategies for post-training by combining human feedback, preference data, and synthetic examples to guide model behavior.

  • Scalable Pipelines: Develop pipelines and frameworks for scalable, high-quality human labeling, model-assisted labeling, and synthetic data generation.

  • Human Preference Modeling: Research and model human preferences and behavior, creating data-driven methods to improve reasoning, truthfulness, and helpfulness.

  • Evaluation Design and Integrity: Iterate on evals through a continuous loop of defining evaluations, optimizing them, and identifying gaps. You'll be responsible for making numbers go up and making sure the numbers are meaningful.

  • Metrics and Benchmarks: Design and evaluate metrics that measure data quality, alignment, and the real-world impact of post-training interventions.

  • Scaling and Exploration: Scale existing methodologies and develop new ones when current approaches hit ceilings. We expect both rigor and invention.

  • Research Publication: Publish and present work that moves the community forward. Share code, datasets, and insights that accelerate progress across industry and academia.
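To give a flavor of the preference-modeling work above: a common starting point in the field is the Bradley-Terry pairwise loss used in reward modeling, where the model is trained so the chosen response scores higher than the rejected one. This is a generic illustrative sketch, not a description of Cognition's actual stack; all names and numbers are hypothetical.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry negative log-likelihood that the chosen response wins:
    -log sigmoid(r_chosen - r_rejected). Larger margins give lower loss."""
    return -math.log(sigmoid(r_chosen - r_rejected))

# Hypothetical reward scores for (chosen, rejected) pairs from an
# annotation campaign, ordered from clear win to clear mislabel.
pairs = [(2.0, 0.5), (1.0, 1.0), (0.2, 1.8)]
losses = [preference_loss(c, r) for c, r in pairs]
```

In practice the scores come from a learned reward model and the loss is minimized over large batches of labeled comparisons; the tied pair above lands exactly at log 2, and inverted pairs are penalized most, which is what makes label quality, the subject of several bullets above, so consequential.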

Exceptional Candidates Have Demonstrated

  • Strong engineering skills with the ability to contribute code and debug in complex codebases

  • Experience with data curation, human feedback, or synthetic data generation for large language models or similar systems

  • Ability to design, run, and interpret experiments with scientific rigor and clarity

  • Proficiency in Python and at least one deep learning framework (PyTorch, TensorFlow, etc.); comfortable with distributed training and code that scales

  • Strong grasp of probability, statistics, and ML fundamentals; can distinguish real effects from noise and bugs

  • Prior experience with RLHF, RLAIF, preference modeling, or reward learning for large models

  • Experience managing or analyzing human data collection campaigns or large-scale annotation workflows

  • Research or engineering contributions in alignment, data-centric AI, or human-AI collaboration

  • Familiarity with synthetic data pipelines, active learning, or model-assisted labeling

  • We care more about demonstrated capability than credentials. A PhD is one signal among many.

Resources & Environment

  • Small, highly selective team where research and product move together; prototypes reach real deployment quickly

  • You'll have access to the data, tooling, and compute needed to run experiments and collection campaigns at frontier scale from day one

  • The environment rewards speed, autonomy, and technical depth, with minimal process overhead; this is one of the most competitive and fast-moving problem areas in AI