Publications
'Task Shift: From Classification to Regression in Overparameterized Linear Models' (AISTATS 2025); 'The Group Robustness is in the Details: Revisiting Finetuning under Spurious Correlations' (NeurIPS 2024); 'Towards Last-layer Retraining for Group Robustness with Fewer Annotations' (NeurIPS 2023).
Awards
'Task Shift: From Classification to Regression in Overparameterized Linear Models' won a top-10 award at the IMS Workshop on Frontiers in Statistical Machine Learning 2025.
Research Experience
Interned at Google and Microsoft Research, most recently working on the development of a 15B-parameter multimodal vision-language model with applications to computer-use agents.
Education
PhD: H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, advised by Vidya Muthukumar and Jacob Abernethy; BS: Applied and Computational Mathematics, University of Southern California, advised by Shaddin Dughmi.
Background
PhD student in Machine Learning, interested in foundational aspects of generalization in machine learning, including the generalization theory of neural networks and feature learning; robustness under distribution shift, task shift, and spurious correlations; and the foundations and applications of large-scale multimodal vision-language models.
Miscellany
Was a Trustee Scholar and Viterbi Fellow at the University of Southern California.