Tianyi Tang

Google Scholar ID: t1mRUvQAAAAJ
Qwen Team, Alibaba Group & Renmin University of China
Artificial Intelligence · Natural Language Processing
Citations & Impact
All-time
Citations: 12,545
H-index: 18
i10-index: 19
Publications: 20
Co-authors: 8
Academic Achievements
  • Published several papers in top-tier conferences and journals such as ICLR, ACL, EMNLP, and ACM Computing Surveys (CSUR), including:
    - Qwen3 Technical Report
    - Neuron-based Personality Trait Induction in Large Language Models (ICLR 2025)
    - Qwen2.5 Technical Report
    - Language-Specific Neurons: The Key to Multilingual Capabilities in Large Language Models (ACL 2024)
    - LLMBox: A Comprehensive Library for Large Language Models (ACL 2024 System Demonstrations)
    - Not All Metrics Are Guilty: Improving NLG Evaluation by Diversifying References (NAACL 2024)
    - Pre-trained Language Models for Text Generation: A Survey (ACM Computing Surveys)
    - BAMBOO: A Comprehensive Benchmark for Evaluating Long Text Modeling Capacities of Large Language Models (LREC-COLING 2024)
    - A Survey of Large Language Models
    - Not All Languages Are Created Equal in LLMs: Improving Multilingual Capability by Cross-Lingual-Thought Prompting (Findings of EMNLP 2023)
Research Experience
  • Serves as a researcher on the Alibaba Qwen team, contributing to multiple large language model projects.
Education
  • Graduated from Renmin University of China, where he conducted research under the supervision of Professor Wayne Xin Zhao in the AI Box lab.
Background
  • Currently a member of the Qwen Team at Alibaba, deeply involved in the development of the Qwen series of large language models, including Qwen2.5, QwQ, and Qwen3. His primary research focus is improving human alignment for large language models.
Miscellany
  • Can be reached at steventianyitang@outlook.com.