July 2025: Our work on extracting copyrighted content from open-weight models was featured by The Atlantic.
May 2025: Our Independence Tests for Language Models paper was accepted to ICML 2025 as an oral presentation (top 2.6% of submissions).
May 2025: Released Marin, the best-in-class open-source language model at the time of release, outperforming Llama 3.1 8B. I led instruction tuning and decontamination.
May 2025: Our AILuminate benchmark was covered by WIRED and Business Wire.
May 2024: Meta used our MLCommons AI Safety Benchmark v0.5 for safety testing of their Llama 3 models.
May 2023: Selected as a Knight-Hennessy Scholar (2023 cohort) — one of 84 scholars chosen from over 7,500 applications (1.1% acceptance rate).
Research Experience
I am part of the Stanford AI Lab, Stanford NLP, Stanford ML, and Stanford Trustworthy AI Research (STAIR). I also serve as a part-time advisor to AlphaXiv and MLCommons.
Education
PhD in Computer Science at Stanford University, advised by Prof. Percy Liang and Prof. Sanmi Koyejo.
Background
I am a PhD student in Computer Science at Stanford University. My research focuses on building tools to understand, democratize, and safeguard the benefits of modern AI systems, with specific interests in AI safety, intellectual property and copyright, behavioral specification, and fully open-source language models.
Miscellany
I am passionate about addressing issues of diversity and inclusion in academia, and I work on improving outreach and inclusion in CS research as a mentor in CURIS, the Stanford CS department’s REU program. I helped spearhead initiatives such as the CURIS Fellows program, which provides research opportunities for historically underrepresented students, and PURE, which provides research funding for first-generation and low-income students.