Published multiple papers, including 'From System 1 to System 2: A Survey of Reasoning Large Language Models', with several papers accepted at ACL 2025, CVPR 2025, and COLING 2025.
Research Experience
Collaborating closely with Zhijiang Guo at HKUST, Duzhen Zhang at MBZUAI, Xiao Liang at UCLA, and Xiao Liu at MSRA.
Education
Pursuing a Ph.D. in Artificial Intelligence at the Institute of Automation, Chinese Academy of Sciences (CASIA), under the supervision of IEEE/CAAI/CAA/IAPR Fellow Chenglin Liu.
Background
Research interests include LLM reasoning in math, code, and neural-symbolic systems; brain-inspired System-2 intelligence; and continual learning for LLM reasoning. Currently exploring System-2 reasoning in (multimodal) LLMs, LLM agents, and LRM4Science, with applications in math, code, medicine, and beyond.
Miscellany
Available via Email, Google Scholar, GitHub, Twitter, and LinkedIn.