Zhengfeng Lai
Scholar

Google Scholar ID: s_Ws1uYAAAAJ
Apple AI/ML
Vision Foundation Models · Multimodal LLM · AI Health
Citations & Impact (all-time)
  • Citations: 916
  • H-index: 15
  • i10-index: 19
  • Publications: 20
  • Co-authors: 14
Academic Achievements
  • Publications:
    - StreamBridge, accepted at NeurIPS 2025
    - SlowFast-LLaVA-1.5, accepted at COLM 2025
    - STIV and ETVA, accepted at ICCV 2025
    - CLOC: Contrastive Localized Language-Image Pre-Training, accepted at ICML 2025
    - Revisit Large-Scale Image-Caption Data in Pre-training Multimodal Foundation Models, accepted at ICLR 2025
    - VeCLIP and PathCLIP, accepted at ECCV 2024
    - Semi-Path: An Interactive Semi-supervised Learning Framework for Gigapixel Pathology Image Analysis, accepted by Smart Health and presented at IEEE/ACM CHASE 2024
    - PADCLIP: Pseudo-labeling with Adaptive Debiasing in CLIP for Unsupervised Domain Adaptation, accepted at ICCV 2023
    - Smoothed Adaptive Weighting for Imbalanced Semi-Supervised Learning: Improve Reliability Against Unknown Distribution, accepted at ICML 2022
  • Awards:
    - 2024 College of Engineering (COE) Excellence in Graduate Student Research Award
    - 2024 ECE Best PhD Dissertation Award
    - Best Paper Award from the Workshop on Learning with Limited Labelled Data for Image and Video Understanding at CVPR 2022
    - Advancement-to-Candidacy (AC) Fellowship from ECE at UC Davis
    - Received Ph.D. from UC Davis and joined Apple AI/ML (Cupertino, CA) as an ML Researcher in Nov 2023
Research Experience
  • ML Researcher at Apple AI/ML; Applied Scientist Intern at Amazon Lab126.
Education
  • Ph.D. from the University of California, Davis (2023), advised by Prof. Chen-Nee Chuah, Prof. Sen-Ching Cheung, and Prof. Brittany N. Dugger; Bachelor's degree from Zhejiang University (2019).
Background
  • ML Researcher at Apple AI/ML, contributing to Apple Intelligence and Vision Foundation Models. Research interests include large language models, multimodal pre-training, video foundation models, label/data-efficient learning, and AI for healthcare.
Miscellany
  • In his free time, he enjoys playing tennis, hiking, and outdoor adventures. He also enjoys talking with people from diverse backgrounds and learning about different lifestyles.