Hezhen Hu
Scholar

Google Scholar ID: Fff-9WAAAAAJ
University of Texas at Austin
Research interests: Sign Language Recognition, Sign Language Translation, Video Understanding
Citations & Impact
All-time
  Citations: 1,186
  H-index: 14
  i10-index: 19
  Publications: 20
  Co-authors: 19
Academic Achievements
  • Published 'Expressive Gaussian Human Avatars from Monocular RGB Video' at NeurIPS 2024, introducing expressive animatable avatars learned from in-the-wild monocular video without SMPL-X annotations.
  • Published 'SignBERT+: Hand-model-aware Self-supervised Pre-training for Sign Language Understanding' in TPAMI 2023—the first self-supervised pre-training framework for sign language understanding (extension of SignBERT).
  • Published 'Hand-Object Interaction Image Generation' at NeurIPS 2022, proposing a novel task of generating images depicting hand-object interactions.
  • Co-authored 'MMHU: A Massive-Scale Multimodal Benchmark for Human Behavior Understanding' (preprint).
  • Published 'Uni-Sign: Toward Unified Sign Language Understanding at Scale' at ICLR 2025, presenting a unified large-scale framework for sign language understanding.
  • Published 'Prior-aware Cross Modality Augmentation Learning for Continuous Sign Language Recognition' in TMM 2023, introducing a novel cross-modality augmentation paradigm with prior knowledge.
  • Published 'Collaborative Multilingual Sign Language Recognition: A Unified Framework' in TMM 2022, the first work to explore multilingual continuous sign language recognition.
  • Published 'SignBERT' at ICCV 2021—the first self-supervised pre-training method for isolated sign language recognition using hand-model-aware masked modeling.
  • Published 'Model-Aware Gesture-to-Gesture Translation' at CVPR 2021, an early contribution to gesture translation.
  • Organizing the 3rd AI3DCC workshop at ICCV 2025 (scheduled for October 2025).