Publications: Multiple papers accepted at ICML 2025, COLM 2025, and ACM-BCB 2025, including 'DRIFT: Learning from Abundant User Dissatisfaction in Real-World Preference Learning' and 'LLMs Can Get “Brain Rot”!', the latter of which received widespread international media coverage from outlets including Nature, Wired, and Forbes.
Research Experience
Large Language Model Research Intern at Texas Instruments (May 2024 - Dec 2024), where he fine-tuned TI’s LLM for code generation, improving performance by 59% over the base model; implemented a multimodal RAG system to further enhance the LLM’s performance; and built an AI agent system integrating code generation, a self-debugging compiler, and RAG to streamline development workflows.
Education
Ph.D.: Purdue University, supervised by Prof. Ananth Grama; B.S.: University of Science and Technology of China, with a major in Electronic Information Engineering and a minor in Artificial Intelligence.
Background
Research Interests: LLM post-training, particularly synthetic data generation, reward modeling, and self-improvement; LLM training and inference efficiency; trustworthy ML. Background: A fourth-year Ph.D. student at Purdue University under the supervision of Prof. Ananth Grama, also working with Prof. Zhangyang Wang and Prof. Junyuan Hong. Prior to his Ph.D., he completed his undergraduate studies in Electronic Information Engineering, with a minor in Artificial Intelligence, at the University of Science and Technology of China.
Miscellany
Personal Projects: Developed the BiteEmo app, which helps people manage their emotions by recording their thoughts and feelings, supporting better health.