Bing Zhou

Google Scholar ID: xc4mjhkAAAAJ
Snap Research
Human Motion Generation · Video Generation · Human-Computer Interaction
Citations & Impact
All-time
  • Citations: 698
  • h-index: 15
  • i10-index: 18
  • Publications: 20
  • Co-authors: 45
Academic Achievements
  • One paper accepted to IEEE ICC'19
  • Extended journal version of BatMapper accepted to IEEE Transactions on Mobile Computing (TMC); presented at MobiCom'18
  • EchoPrint paper accepted to MobiCom'18
  • Knitter journal paper accepted to IEEE Transactions on Mobile Computing (TMC)
  • EasyFind project won First Prize at Entrepreneur Challenge 2018 ($10,000 cash award)
  • EasyFind project finalist at Hackathon@CEWIT'18 (top 6 teams)
  • BatTracker paper accepted to SenSys'17
  • BatMapper demo presented at MobiCom'17
  • BatMapper paper accepted to MobiSys'17
  • Received Google Research Award
  • Billiards Guru project finalist at Hackathon@CEWIT'17 (top 6 teams)
  • One paper accepted to IEEE ICC'17
  • Knitter paper accepted to INFOCOM'17
Research Experience
  • Staff Research Engineer at Snap Research, NYC
  • Presented work on active visual recognition in augmented reality at IBM Research
  • Contributed to multiple research projects, including BatTracker, BatMapper, and Knitter, and presented at several international conferences
Education
  • Successfully defended Ph.D. dissertation in 2019
  • Summer internship at IBM Thomas J. Watson Research Center
Background
  • 3D human animation generation from various signals, such as text, music, audio, and video
  • Human-centered video generation, including audio-conditioned lip-synced video generation and background video generation for animations
  • 3D avatar and animal reconstruction and animation; 3D/4D content creation
  • Human-centered sensing, including sensor-based human pose tracking, hand-gesture recognition from multimodal sensors, and data synthesis using generative models
  • Passionate about bridging the gap between multimodal AI understanding and realistic human motion synthesis, for applications in entertainment, AR/VR, and digital human creation