Academic Achievements
Publications: 'From Introspection to Best Practices', 'LLM The Genius Paradox', 'MuirBench', 'Cognitive Overload', 'CEO', 'AutoDAN', among others.
Awards: Outstanding Paper Award at EMNLP 2023 for 'Look-back Decoding for Open-Ended Text Generation'.
Research Experience
Microsoft Research, Redmond, USA — Research Intern, May 2024 to Aug 2024. Mentors: Dr. Sheng Zhang and Dr. Hoifung Poon.
Tencent AI Lab, Bellevue, USA — Research Intern, Jun 2022 to Aug 2022. Mentors: Dr. Hongming Zhang and Dr. Jianshu Chen.
Amazon, Seattle, USA — Applied Science Intern, Jun 2020 to Aug 2020. Mentors: Dr. Seyi Feyisetan and Dr. Abhinav Aggarwal.
Fraunhofer Heinrich Hertz Institute, Berlin, Germany — Software Engineer Intern, Jun 2016 to Sep 2017.
Education
Ph.D. in Computer Science, University of Southern California. Advisors: Prof. Muhao Chen and Prof. Xuezhe Ma.
M.S. in Computer Science, Shanghai Jiao Tong University (2019). Advisors: Prof. Yanmin Zhu and Prof. Yanyan Shen.
M.S. in Computer Science, Technical University of Berlin (2017). Worked with Prof. Yan Liu and Prof. Zhenhui Li.
B.S. in Computer Science, Shanghai Jiao Tong University (2015).
Background
Research Interests: Building trustworthy large language model systems, with a focus on inference-time algorithms that improve LLM factuality, safety, and alignment while defending against jailbreaks and adversarial attacks; designing multimodal LLMs, spanning pre-training, fine-tuning, and in-context learning.