Scholar
Chengyu Wang
Google Scholar ID: _AVfRnQAAAAJ
Alibaba Group
Natural Language Processing
Large Language Model
Multi-modal Learning
Homepage
Google Scholar
Citations & Impact
All-time
Citations: 2,031
H-index: 24
i10-index: 60
Publications: 20
Co-authors: 32
Contact
GitHub
Publications
31 items
Mock Worlds, Real Skills: Building Small Agentic Language Models with Synthetic Tasks, Simulated Environments, and Rubric-Based Rewards
2026 · Cited: 0
VTC-R1: Vision-Text Compression for Efficient Long-Context Reasoning
2026 · Cited: 0
An Information-Theoretic Framework for Robust Large Language Model Editing
2025 · Cited: 0
M$^3$Prune: Hierarchical Communication Graph Pruning for Efficient Multi-Modal Multi-Agent Retrieval-Augmented Generation
2025 · Cited: 0
Thinking with DistilQwen: A Tale of Four Distilled Reasoning and Reward Model Series
2025 · Cited: 0
SPG: Sandwiched Policy Gradient for Masked Diffusion Language Models
2025 · Cited: 0
Shared Neural Space: Unified Precomputed Feature Encoding for Multi-Task and Cross Domain Vision
2025 · Cited: 0
From Correction to Mastery: Reinforced Distillation of Large Language Model Agents
2025 · Cited: 0
Academic Achievements
Sept 2025: Three papers on knowledge distillation accepted to EMNLP 2025
Sept 2025: Paper 'UniEdit' (an LLM editing benchmark) accepted to NeurIPS 2025
June 2025: Paper on gradient leakage attacks accepted to USENIX Security 2025
June 2025: Released EasyDistill, a knowledge distillation toolkit for LLMs
May 2025: Two papers on multi-agent QA and knowledge distillation accepted to ACL 2025
Apr 2025: Three papers on diffusion models and multi-modal language models accepted to IJCAI 2025
Feb 2025: Four papers on diffusion models and multi-modal language models accepted to CVPR 2025
Feb 2025: Paper on trustworthiness of generative models accepted to IJCV
Jan 2025: Paper on text-to-image synthesis evaluation accepted to ICLR 2025
Dec 2024: Paper 'VisEdit' (knowledge editing) accepted to AAAI 2025
Nov 2024: Paper on data augmentation for LLMs accepted to COLING 2025
Sept 2024: Three papers 'VideoCLIP-XL' (multi-modal learning), 'RECIPE' (knowledge editing), and 'TAPIR' (knowledge distillation) accepted to EMNLP 2024
Co-authors
32 total
Minghui Qiu
Alibaba Group
Jun Huang
Senior Algorithm Expert, Alibaba Group
Ming Gao
School of Data Science and Engineering, East China Normal University
Cen Chen
East China Normal University
Taolin Zhang
Hefei University of Technology
Weining Qian
Professor of Computer Science, School of Data Science and Engineering, East China Normal University
Songfang Huang
Peking University, Alibaba DAMO, IBM Research, The University of Edinburgh
Kui Jia (贾奎)
The Chinese University of Hong Kong, Shenzhen