Publications
- Paper 'SUV: Scalable Large Language Model Copyright Compliance with Regularized Selective Unlearning' accepted at COLM 2025
- Paper 'SHIELD: Evaluation and Defense Strategies for Copyright Compliance in LLM Text Generation' published at EMNLP 2024
- Survey 'Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity' published in ACM Computing Surveys (CSUR) 2025
- Paper 'Towards Federated RLHF with Aggregated Client Preference for LLMs' published at ICLR 2025
- Paper 'Evaluating the Factuality of Large Language Models using Large-Scale Knowledge Graphs' published in IEEE Data Engineering Bulletin 2024
- Paper 'SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales' published at EMNLP 2024
- Paper 'CausalEval: Towards Better Causal Reasoning in Language Models' published at NAACL 2025
Research Experience
- Applied Scientist Intern, AWS AI Labs – Fundamental Research Team, 2025
- Research Intern, AliCloud & Alibaba DAMO Academy, 2021-2023
- Intern, ByteDance, 2019-2020
Education
- PhD Student, Purdue University, 2023 - Present, Advisors: Prof. Jing Gao and Prof. Xiaoqian Wang
- Master's Degree, Zhejiang University, 2020 - 2023, Graduated with the Outstanding Graduate Award and the Outstanding Master's Thesis of Zhejiang Province (Top 2%)
- Bachelor's Degree, Northeastern University, 2016 - 2020, Graduated with the Outstanding Graduate Award
About
PhD student at Purdue University, with research interests in copyright compliance for large language models, factuality assessment, and causal reasoning. Advised by Prof. Jing Gao and Prof. Xiaoqian Wang.