Zekun Wang
Google Scholar ID: BrTJVdEAAAAJ
Harbin Institute of Technology
Research interests: Large Language Model · Efficiency · Agent
Citations & Impact (all-time)
  • Citations: 4,236
  • H-index: 10
  • i10-index: 11
  • Publications: 20
  • Co-authors: 27
Academic Achievements
  • Two papers accepted at NeurIPS 2025, with 'Gated Attention for Large Language Models' selected as an Oral presentation.
  • ICLR 2025 Spotlight paper 'AgentTrek: Agent Trajectory Synthesis via Guiding Replay with Web Tutorials' (Top 5%).
  • Two papers accepted at ACL 2025: 'EffiVLM-Bench' and 'Demons in the Detail'.
  • COLING 2025 Oral paper 'SmartTrim: Adaptive Tokens and Attention Pruning for Efficient Vision-Language Models'.
  • Published 'CFSP: An Efficient Structured Pruning Framework for LLMs with Coarse-to-Fine Activation Information' at COLING 2025 as co-first and corresponding author.
  • NeurIPS 2024 Oral paper 'Divide-and-Conquer Meets Consensus: Unleashing the Power of Functions in Code Generation'.
  • Published 'Distilled Dual-Encoder Model for Vision-Language Understanding' at EMNLP 2022.
  • Published 'Less Is More: Domain Adaptation with Lottery Ticket for Reading Comprehension' in Findings of EMNLP 2021.
  • Core contributor to technical reports and model development for Qwen3, Qwen3-Next, and Qwen2.5.