Zhongang Cai (蔡中昂)
Scholar

Google Scholar ID: WrDKqIAAAAAJ
SenseTime Research
Computer Vision · Multimodal · Spatial Intelligence · Embodied AI · Virtual Humans
Citations & Impact (All-time)
  • Citations: 4,084
  • H-index: 29
  • i10-index: 40
  • Publications: 20
  • Co-authors: 21
Resume (English only)
Academic Achievements
  • Publications:
    - Digital Life Project 2 (DLP3D) accepted to SIGGRAPH Asia 2025 (Real-Time Live!)
    - SMPLest-X accepted to TPAMI 2025
    - PoseFuse3D-KI accepted to NeurIPS 2025
    - DPoser-X accepted to ICCV 2025 (Oral)
    - ADHMR accepted to ICML 2025
    - SOLAMI, Disco4D, and EgoLife accepted to CVPR 2025
    - MeshAnything accepted to ICLR 2025
    - GTA-Human accepted to TPAMI 2024
    - WHAC and Large Motion Model accepted to ECCV 2024
    - SMPLer-X: Scaling Up Expressive Human Pose and Shape Estimation accepted to NeurIPS 2023 (Datasets and Benchmarks Track)
    - HuMMan: Multi-Modal 4D Human Dataset for Versatile Sensing and Modeling accepted to ECCV 2022 (Oral)
Research Experience
  • Currently a Staff Research Scientist at SenseTime Research, working with Dr. Lei Yang. He has published in multiple top-tier international conferences and has been involved in several major projects.
Education
  • Ph.D. from MMLab@NTU, advised by Prof. Ziwei Liu and Prof. Chen Change Loy, where he spent wonderful years exploring virtual humans.
Background
  • Research interests: Multimodal foundation models, with an emphasis on spatial intelligence. He leads the open-source project DLP3D, which focuses on building real-time autonomous 3D characters.