Tongtian Yue
Google Scholar ID: OrICiVQAAAAJ
Institute of Automation, Chinese Academy of Sciences
Multimodal Pretraining
Vision-Language
Citations & Impact
All-time
Citations: 171
H-index: 6
i10-index: 5
Publications: 15
Co-authors: 5
Contact
No contact links provided.
Publications
8 items
AdaSpark: Adaptive Sparsity for Efficient Long-Video Understanding
2026 · Cited: 0

LaVi: Efficient Large Vision-Language Models via Internal Feature Modulation
2025 · Cited: 0

Prefix Grouper: Efficient GRPO Training through Shared-Prefix Forward
2025 · Cited: 0

Towards Unified Referring Expression Segmentation Across Omni-Level Visual Target Granularities
2025 · Cited: 0

Efficient Motion-Aware Video MLLM
2025 · Cited: 0

EEGPT: Unleashing the Potential of EEG Generalist Foundation Model by Autoregressive Pre-training
arXiv.org · 2024 · Cited: 8

OneDiff: A Generalist Model for Image Difference Captioning
Asian Conference on Computer Vision · 2024 · Cited: 2

Needle In A Video Haystack: A Scalable Synthetic Evaluator for Video MLLMs
2024 · Cited: 0
Resume (English only)
Co-authors
5 total
Jing Liu 刘静
Professor, Institute of Automation, Chinese Academy of Sciences (CASIA)
Longteng Guo 郭龙腾
Associate Professor, Institute of Automation, Chinese Academy of Sciences (CASIA)
Zijia Zhao
Institute of Automation, Chinese Academy of Sciences (CASIA)
Yepeng Tang
Beijing Jiaotong University
Co-author 5