Yu Pan
Scholar

Google Scholar ID: NuxEyPAAAAAJ
Noah's Ark Lab, Huawei
Tensor Learning · Model Compression · Model Initialization · Training Efficiency
Citations & Impact
All-time
  • Citations: 635
  • H-index: 11
  • i10-index: 11
  • Publications: 19
  • Co-authors: 0
Publications
19 items
Resume (English only)
Academic Achievements
  • Selected Publications:
    - IDInit: A Universal and Stable Initialization Method for Neural Network Training, ICLR 2025
    - Preparing Lessons for Progressive Training on Language Models, AAAI 2024 (Oral, Top 10%)
    - Reusing Pretrained Models by Multi-linear Operators for Efficient Training, NeurIPS 2023
    - Tensor Networks Meet Neural Networks: A Survey and Future Perspectives, Preprint 2023
    - A Unified Weight Initialization Paradigm for Tensorial Convolutional Neural Networks, ICML 2022
    - RegNet: Self-Regulated Network for Image Classification, TNNLS 2022
    - TedNet: A Pytorch Toolkit for Tensor Decomposition Networks, Neurocomputing 2022
    - Heuristic Rank Selection with Progressively Searching Tensor Ring Network, Complex & Intelligent Systems 2021
    - Compressing Recurrent Neural Networks with Tensor Ring for Action Recognition, AAAI 2019
Research Experience
  • Research Scientist at Huawei Noah’s Ark Lab.
Education
  • Ph.D. from the School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen (HITSZ), supervised by Prof. Zenglin Xu.
Background
  • Interests: Tensor Learning, Model Compression, Model Initialization, Training Efficiency. Major focus: investigating combinations of tensor decomposition techniques and deep neural networks, with an emphasis on model compression and efficient training.
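To illustrate the compression idea behind this research direction, here is a minimal sketch using plain NumPy and a truncated SVD, the matrix-rank special case of the tensor decompositions (e.g. tensor ring) studied in the publications above. All sizes and the rank are illustrative assumptions, not values from the papers.

```python
import numpy as np

# Hypothetical example: approximate a dense layer's weight matrix by a
# low-rank factorization W ~ A @ B, trading a small reconstruction
# error for far fewer parameters.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))  # dense weight matrix (illustrative)

rank = 32  # assumed target rank
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * s[:rank]   # shape (256, rank)
B = Vt[:rank, :]             # shape (rank, 256)

full_params = W.size                 # 256 * 256 = 65536
compressed_params = A.size + B.size  # 2 * 256 * 32 = 16384, i.e. 4x fewer
rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(full_params, compressed_params, round(rel_err, 3))
```

Tensor-ring and other tensor-network formats generalize this by reshaping the weight matrix into a higher-order tensor and factorizing along several modes, which typically yields much higher compression ratios at comparable accuracy.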