Recent papers: 8 accepted to NeurIPS 2024 and 5 to ICLR 2024 (including two spotlight selections). Recent works include Large Language Model Unlearning and Trustworthy Large Language Model. Awards: best paper runner-up at the ICLR 2023 Workshop on Trustworthy and Reliable Large-Scale Machine Learning Models; best paper award at the ICML 2022 Workshop on New Frontiers in Adversarial Machine Learning; best paper award at the AAMAS 2022 Workshop on Learning with Strategic Agents (LSA); NSF CAREER Award in 2022.
Research Experience
Previously a postdoctoral fellow at Harvard University. Current research is supported by the National Science Foundation (through its CORE, FAI, CAREER, and TRIPODS programs), Amazon, UC Santa Cruz, and CROSS.
Education
Ph.D. from the University of Michigan, Ann Arbor; B.Sc. from Shanghai Jiao Tong University, China.
Background
Associate Professor of Computer Science and Engineering at UC Santa Cruz. Research interests include data-centric machine learning and trustworthy machine learning.