Multiple papers accepted at top conferences such as ACL and EMNLP; released a series of Qwen models, including Qwen3, QwQ-32B, and Qwen2.5-Math-PRM; released the ProcessBench benchmark; the VECO 2.0 cross-lingual pre-trained model ranked first on the Google XTREME leaderboard.
Research Experience
Served as a researcher on the Qwen team, responsible for the development and research of the Qwen series models. Prior research includes pre-training and fine-tuning language models for NLU tasks, as well as weakly supervised learning in machine learning.
Background
A researcher on the Qwen Team at Alibaba Group. Current interests focus on enhancing the intelligence of large language models, particularly their reasoning and agent capabilities. Contributed to the development and research of the Qwen series models, primarily in post-training and agents.