Haokun Lin

Google Scholar ID: 7DnpUlIAAAAJ
City University of Hong Kong & CASIA
Research interests: Multi-modal Learning · Efficient Deep Learning
Citations & Impact (All-time)
  • Citations: 660
  • H-index: 7
  • i10-index: 6
  • Publications: 15
  • Co-authors: 0
Academic Achievements
  • NeurIPS 2024 Oral paper: "DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs" (co-first author)
  • CVPR 2024 paper: "MoPE-CLIP: Structured Pruning for Efficient Vision-Language Models with Module-wise Pruning Error Metric"
  • ICLR 2024 paper: "Plug-and-Play: An Efficient Post-training Pruning Method for Large Language Models"
  • ACL 2024 Findings paper: "IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact"
  • ECCV 2024 paper: "MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?"
  • ICCV 2025 paper: "DOGR: Towards Versatile Visual Document Grounding and Referring"
  • IEEE TMM 2025 paper: "Scale Up Composed Image Retrieval Learning via Modification Text Generation"
  • ICLR 2025 paper: "Image-level Memorization Detection via Inversion-based Inference Perturbation"
  • Multiple preprints including "TokLIP", "LRQ-DiT", and "Quantization Meets dLLMs"
  • First Prize, 2024 Graduate Academic Forum at UCAS
  • Top Reviewer at NeurIPS 2024