
Research, Mid-Training

Cognition

San Francisco, CA, USA
Posted on Apr 9, 2026

Location: San Francisco Bay Area
Employment Type: Full time
Location Type: On-site
Department: Research & Development

Who We Are

We are an applied AI lab building end-to-end software agents. We're the team behind Devin, the first AI software engineer, and Windsurf, an AI-native IDE. These products represent our vision for AI that doesn't just assist engineers, but works alongside them as a genuine teammate.

Our team is small and talent-dense: world-class competitive programmers, former founders, and researchers from the frontier of AI, including alumni of Scale AI, Palantir, Cursor, and Google DeepMind.

Role Mission

Mid-training sits at the seam between pre-training and post-training and is one of the highest-leverage points in the entire model pipeline. This is where raw base model capability is sharpened into something that can reason deeply, generalize reliably, and serve as the foundation that post-training builds on.

You will own the late-stage training decisions that determine what our models are fundamentally capable of: data mix and quality uplift, annealing schedules, context length extension, capability injection across coding, math, and reasoning, and the synthetic data strategies that make all of it scale. The work cuts across what is classically considered pre-training and post-training, and we don't distinguish between research and engineering; we expect both.

What You'll Accomplish

  • Data Mix and Quality Uplift: Design and iterate on high-quality data mixtures for late-stage and annealing training runs. Develop principled methods for sourcing, filtering, and weighting data to sharpen model capabilities without degrading general performance.

  • Capability Injection: Drive targeted improvements in coding, mathematics, and long-horizon reasoning through curated data strategies and training interventions. Translate research insights into measurable capability gains on our agents.

  • Synthetic Data Research: Develop and evaluate synthetic data pipelines that generate training signal at scale. Understand the limits and failure modes of synthetic approaches and build methods that hold up in production training runs.

  • Annealing and Schedule Design: Research and optimize multi-stage learning rate schedules, warmup strategies, and compute allocation across training phases. Understand how schedule choices interact with data distribution and model behavior.

  • Context Length Extension: Research and implement methods for extending effective context length without degrading short-context performance. This includes positional encoding strategies, data construction, and targeted evaluation.

  • Evaluation and Iteration: Build evals that distinguish real capability improvements from benchmark overfitting. Close the loop between training decisions and what actually matters for Devin and our other systems in deployment.

  • Scaling and Methodology: Measure how mid-training interventions scale with compute and data. Develop new approaches when existing methods hit ceilings; we expect both rigorous empiricism and original thinking.
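To make the data-mix idea above concrete, here is a minimal sketch of sampling from named data sources according to mixture weights. The source names, weights, and function signature are invented for illustration; this is not Cognition's actual pipeline, just the basic shape of weighted mixture sampling.

```python
import random

def sample_mixture(datasets, weights, n, seed=0):
    """Draw n examples from named datasets according to mixture weights.

    datasets: dict mapping source name -> list of examples
    weights:  dict mapping source name -> unnormalized mixture weight
    Returns a list of (source_name, example) pairs.
    """
    rng = random.Random(seed)
    names = list(datasets)
    total = sum(weights[name] for name in names)
    probs = [weights[name] / total for name in names]
    out = []
    for _ in range(n):
        # Pick a source according to its mixture probability,
        # then pick an example uniformly from that source.
        source = rng.choices(names, weights=probs, k=1)[0]
        out.append((source, rng.choice(datasets[source])))
    return out
```

In a real training run the mixture weights would themselves be the object of research (and sampling would stream from sharded corpora rather than in-memory lists), but the knob being tuned is the same: the probability mass assigned to each source.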
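The multi-stage schedule work can likewise be sketched as a simple piecewise learning-rate function with a linear warmup, a stable plateau, and a linear anneal. The phase fractions and rates below are placeholder values chosen for the sketch, not a recommendation.

```python
def lr_at_step(step, total_steps, peak_lr=3e-4, final_lr=3e-5,
               warmup_frac=0.01, anneal_frac=0.2):
    """Piecewise schedule: linear warmup -> constant plateau -> linear anneal."""
    warmup_steps = int(total_steps * warmup_frac)
    anneal_start = int(total_steps * (1.0 - anneal_frac))
    if step < warmup_steps:
        # Linear warmup from 0 up to peak_lr.
        return peak_lr * step / max(warmup_steps, 1)
    if step < anneal_start:
        # Stable phase at peak_lr.
        return peak_lr
    # Linear anneal from peak_lr down to final_lr over the last phase.
    frac = (step - anneal_start) / max(total_steps - anneal_start, 1)
    return peak_lr + (final_lr - peak_lr) * frac
```

Research questions in this area include which decay shapes (linear, cosine, inverse-sqrt) interact best with the annealing data mix, and how much compute to reserve for the final phase; the sketch only fixes the structure those questions live in.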

Exceptional Candidates Have Demonstrated

  • Deep familiarity with the LLM training pipeline end to end: pre-training data, optimization, architecture, and how mid-training and post-training interact

  • Hands-on experience with continual pre-training, annealing, or late-stage data mixing for large models

  • Strong intuition for data quality: what makes a dataset useful for training, how to filter and curate at scale, and how data mix choices compound across evals

  • Experience developing or evaluating synthetic data pipelines for capability improvement

  • Proficiency in Python and deep learning frameworks (PyTorch); comfortable debugging distributed training at scale

  • Strong fundamentals in optimization, statistics, and ML theory; able to distinguish real effects from noise, instability, and overfitting

  • A track record of original contributions: publications, open-source impact, or internal results that moved a capability frontier

  • Comfort operating in ambiguous, fast-moving environments where the problem definition is as important as the solution

We care more about demonstrated capability than credentials; a PhD is one signal among many.

Resources & Environment

  • Small, highly selective team where research and product move together; prototypes reach real deployment quickly

  • Compute is not a constraint: you get large allocations from day one, with training jobs routinely running across thousands of GPUs

  • The environment rewards speed, autonomy, and technical depth, with minimal process overhead; this is one of the most competitive and fast-moving problem spaces in AI