Qing Yu

Google Scholar ID: As3ImtEAAAAJ
LY Corporation
Computer Vision
Citations & Impact (All-time)
  Citations: 886
  H-index: 13
  i10-index: 17
  Publications: 20
  Co-authors: 2
Academic Achievements
  • Published 'PINO: Person-Interaction Noise Optimization for Long-Duration and Customizable Motion Generation of Arbitrary-Sized Groups' at ICCV 2025 (co-first author).
  • Published 'Unsolvable Problem Detection: Evaluating Trustworthiness of Vision Language Models' at ACL 2025.
  • Published 'A Benchmark and Evaluation for Real-World Out-of-Distribution Detection Using Vision-Language Models' at ICIP 2025.
  • Published 'ReMoGPT: Part-Level Retrieval-Augmented Motion-Language Models' at AAAI 2025.
  • Published 'Chronologically Accurate Retrieval for Temporal Grounding of Motion-Language Models' at ECCV 2024.
  • Published 'Exploring Vision Transformers for 3D Human Motion-Language Models with Motion Patches' at CVPR 2024.
  • Published 'LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning' at NeurIPS 2023.
  • Published 'Frame-Level Label Refinement for Skeleton-Based Weakly-Supervised Action Recognition' at AAAI 2023.
  • Published 'Self-Labeling Framework for Novel Category Discovery over Domains' at AAAI 2022.
  • Published 'Divergence Optimization for Noisy Universal Domain Adaptation' at CVPR 2021.
  • Published 'Multi-Task Curriculum Framework for Open-Set Semi-Supervised Learning' at ECCV 2020.
  • Published 'Unknown Class Label Cleaning for Learning with Open-Set Noisy Labels' at ICIP 2020.
  • Additional publications at BMVC, WACV, ICIP, and other international conferences.
Background
  • Currently a Research Scientist at LY Corp. and a Project Researcher at The University of Tokyo, Japan.
  • Member of the Virtual Human Lab at LY Corp.
  • Former member of Aizawa Laboratory, working with Prof. Kiyoharu Aizawa.
  • Research interests center on Computer Vision, specifically Motion Recognition, Motion Generation, Open-Set Recognition, and Domain Adaptation.
  • Proficient in Python and PyTorch.