Zongyu Lin

Google Scholar ID: 4ahRAd4AAAAJ
UCLA
Large Foundation Models · Pretraining · Reasoning
Citations & Impact (All-time)
  • Citations: 1,178
  • H-index: 14
  • i10-index: 17
  • Publications: 20
  • Co-authors: 12
Academic Achievements
  • 2025: Papers 'DreamGen' and 'FLARE' accepted by CoRL 2025
  • 2025: Released Kimi K2, a state-of-the-art open-source agentic model
  • 2025: Paper 'STIV: Scalable Text and Image Conditioned Video Generation' accepted by ICCV 2025
  • 2025: Papers 'QLASS' and 'SparseCL' accepted by ICML 2025 (co-first author)
  • 2025: Core contributor to NVIDIA's GR00T N1 project
  • 2025: Released a preprint on efficient inference-time scaling for language agents via process reward modeling
  • 2025: Released the Kimi k1.5 tech report, one of the earliest works on scaling long-context RL for LLMs
  • 2025: Two papers accepted by ICLR 2025
  • 2024: Contributed to Apple's preprint on a transparent video generation recipe toward Sora-level models
  • 2024: Paper 'VideoPhy: Evaluating Physical Commonsense in Video Generation' accepted as an Oral at the NeurIPS 2024 Vision-Language Workshop
  • 2024: Released the preprint 'SparseCL: Sparse Contrastive Learning for Contradiction Retrieval'
Research Experience
  • Research Intern at NVIDIA (2025): Core contributor to GR00T N1, the first open foundation model for generalist humanoid robots
  • Research Intern at Apple (2024): Main contributor to STIV, a large video generation model that outperforms Pika, Gen-3, and Kling on VBench
  • Founding Research Scientist at Moonshot AI (2023): Led pre-training of long-context LLMs; major contributor to Kimi Chat
  • Quant Research Intern at Ubiquant (2022), a top hedge fund in China
  • Research Intern at SenseTime, China (2021)