Published multiple papers at top conferences, including an ICLR Oral (2022), an ICML Long Talk (2022), and ICLR (2021). Representative papers include 'Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution' and 'Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation'.
Research Experience
Research Lead at OpenAI, contributing to projects such as o3, o1, and GPT-4.5.
Education
PhD student at Stanford University, advised by Percy Liang and Tengyu Ma; research focused on pretraining, fine-tuning, and out-of-distribution generalization.
Background
Research interests include generalization and exploration in reinforcement learning. Currently a Research Lead at OpenAI and a core contributor to o1 and GPT-4.5.
Miscellany
Co-advised several talented undergraduate and master's students at Stanford, some of whom have gone on to publish insightful papers of their own.