Published multiple papers, including 'Don't Throw Away Your Pretrained Model' (arXiv), 'Sparta Alignment: Collectively Aligning Multiple Language Models through Combat' (NeurIPS 2025), and 'Heterogeneous Swarms: Jointly Optimizing Model Roles and Weights for Multi-LLM Systems' (NeurIPS 2025), among others. Received the IBM PhD Fellowship, the Jane Street Graduate Research Fellowship, and the Baidu PhD Fellowship.
Research Experience
Conducts research at the University of Washington across multiple projects.
Education
PhD student at the University of Washington, advised by Yulia Tsvetkov.
Background
Research Interests: model collaboration, social NLP, and networks and structures.