Xiaopeng Zhang (张晓鹏)

Google Scholar ID: Ud6aBAcAAAAJ
Senior Researcher, Huawei Technologies Co., Ltd.
Vision Language Models · Data Engineering
Citations & Impact (all-time)
  • Citations: 16,606
  • H-index: 35
  • i10-index: 62
  • Publications: 20
  • Co-authors: 19
Resume (English only)
Academic Achievements
  • Selected Honors and Awards: SAIL Star, World Artificial Intelligence Conference 2021 (Pangu large pre-trained models); 1st place, nuScenes Autonomous Driving 3D Detection Task 2020; 1st place, WebVision Large-Scale Classification Challenge 2020; Most Innovative Award, LVIS Long-Tailed Challenge 2020; Outstanding Doctoral Thesis Award, China Society of Image and Graphics (CSIG), 2018; Best Student Paper Award, Visual Communications and Image Processing (VCIP) 2014.
  • Projects: Sub-project leader for the 'New Generation Artificial Intelligence' program, on machine learning technology under data security and privacy protection: a large-scale learning system.
Research Experience
  • Senior Researcher and Assistant Scientist at Huawei; lead of the PanGu vision team at Huawei Cloud since 2020, responsible for research on the PanGu foundation models; previously led a research team at Noah's Ark Lab focusing on data-efficient learning for autonomous driving.
Education
  • Ph.D. in Electronic Engineering, Shanghai Jiao Tong University (2017), supervised by Prof. Hongkai Xiong and Prof. Qi Tian; Postdoctoral Fellow, Department of Electrical and Computer Engineering, National University of Singapore (2017-2019), supervised by Prof. Jiashi Feng and Prof. Shuicheng Yan.
Background
  • Research interests: vision and language foundation models, including foundation-model pretraining, workflow development, data engineering, and multi-modal understanding. Previously focused on fine-grained recognition and weakly supervised learning during his Ph.D. and postdoctoral work.
Miscellany
  • Recruiting highly motivated interns (Ph.D. preferred) to work on foundation models, including but not limited to self-supervised learning, multi-modal learning, network optimization, and data engineering.