Kevin Qinghong Lin

Google Scholar ID: EvbGjlUAAAAJ
University of Oxford; National University of Singapore
Vision and Language, Video Understanding, AI Agents
Citations & Impact (all-time)
  • Citations: 1,938
  • H-index: 16
  • i10-index: 23
  • Publications: 20
  • Co-authors: 10
Academic Achievements
  • Published numerous papers at top-tier conferences including NeurIPS, ICML, CVPR, ICLR, ECCV, and ACM MM.
  • 2025: Paper2Poster and Think or Not accepted by NeurIPS 2025; Paper2Poster selected as Oral at ICML 2025 Multi-Agent Systems workshop; UI-Vision accepted by ICML 2025; Show-o accepted by ICLR 2025; ShowUI, VLog, RoICtrl, MovieBench accepted by CVPR 2025; GUI-Narrator accepted by ACM MM 2025; selected for CVPR 2025 Doctoral Consortium; served as Area Chair for NeurIPS 2025.
  • 2024: ShowUI received Outstanding Paper Award (Oral) at NeurIPS 2024 Open-World Agents workshop; VideoGUI (Spotlight) and VideoLLM-MoD accepted by NeurIPS 2024; AssistGPT awarded Best Demo Paper at HCMA@ACM MM 2024; MovieSeq accepted by ECCV 2024; EgoVLP received Egocentric Vision (EgoVis) Distinguished Paper Award; recognized as CVPR 2024 Outstanding Reviewer; VideoLLM-online and SparseFormer accepted by CVPR 2024; recognized as NeurIPS 2024 Top Reviewer.
  • 2023: VisorGPT accepted by NeurIPS 2023; EgoVLP received PREMIA Best Student Paper Award (Gold); UniVTG, EgoVLPv2, TL;DR accepted by ICCV 2023; All-in-one and Afformer accepted by CVPR 2023.
  • 2022: EgoVLP (Spotlight) accepted by NeurIPS 2022; EgoVLP won Double Champions at Joint 1st Ego4D and 10th EPIC Workshop, CVPR 2022.
  • Key projects include Show-o, ShowUI, UI-Vision, Paper2Poster, Paper2Video, Code2Video, VideoMind, the EgoVLP series, UniVTG, and Think or Not.