Ligeng Zhu
Google Scholar ID: y0LVrtgAAAAJ
Nvidia
Machine Learning · Efficient Deep Learning
Citations & Impact (all-time)
  • - Citations: 8,384
  • - H-index: 25
  • - i10-index: 27
  • - Publications: 20
  • - Co-authors: 23
Academic Achievements
  • - PockEngine: Sparse and Efficient Fine-tuning in a Pocket (MICRO-56, 2023)
  • - On-Device Training Under 256KB Memory (NeurIPS, 2022)
  • - Enable deep learning on mobile devices: Methods, systems, and applications (TODAES, 2022)
  • - Delayed Gradient Averaging: Tolerate the Communication Latency for Federated Learning (NeurIPS, 2021)
  • - IOS: Inter-Operator Scheduler for CNN Acceleration (MLSys, 2021)
  • - TinyTL: Reduce Memory, Not Parameters for Efficient On-Device Learning (NeurIPS, 2020)
  • - DataMix: Efficient Privacy-Preserving Edge-Cloud Inference (ECCV, 2020)
  • - HAT: Hardware-Aware Transformers for Efficient Neural Machine Translation (ACL, 2020)
  • - Distributed Training across the World (NeurIPS Workshop on Systems for ML, 2019)
  • - Deep Leakage from Gradients (NeurIPS, 2019)
Research Experience
  • - Conducting research on efficient designs for edge computing at MIT
Education
  • - Ph.D. student at MIT, advisor: Prof. Song Han
  • - Undergraduate Dual Degree Program between Zhejiang University and Simon Fraser University
Background
  • - Research interests: efficient designs for edge computing
  • - As an undergraduate, worked with Prof. Brian Funt on colour vision and with Prof. Ping Tan on attribute recognition
Miscellany
  • - Previously lived in Hangzhou and Vancouver
  • - Open to potential collaborations