Beyond Single-Sample: Reliable Multi-Sample Distillation for Video Understanding

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key limitation of conventional black-box distillation in video understanding: relying on a single teacher response per input yields high-variance, format-inconsistent, and therefore unreliable supervision signals. To overcome this, the authors propose R-MSD, a framework that, for the first time in video understanding, distills from multiple teacher samples in a reliability-aware way. R-MSD constructs a task-adaptive teacher ensemble to model teacher sampling variance and integrates quality-aware signal matching with an adversarial distillation objective, dynamically adapting supervision signals to both closed-ended and open-ended reasoning tasks. The approach effectively suppresses teacher noise and improves generalization, outperforming single-sample distillation and SFT+RL baselines by 1.5% on VideoMME, 3.2% on Video-MMMU, and 3.6% on MathVerse.

📝 Abstract
Traditional black-box distillation for Large Vision-Language Models (LVLMs) typically relies on a single teacher response per input, which often yields high-variance responses and format inconsistencies in multimodal or temporal scenarios. To mitigate this unreliable supervision, we propose R-MSD (Reliable Multi-Sample Distillation), a framework that explicitly models teacher sampling variance to enhance distillation stability. Rather than relying on a single teacher response, our approach leverages a task-adaptive teacher pool to provide robust supervision tailored to both closed-ended and open-ended reasoning. By integrating quality-aware signal matching with an adversarial distillation objective, our approach effectively filters teacher noise while maximizing knowledge transfer. Extensive evaluations across comprehensive video understanding benchmarks demonstrate that R-MSD consistently outperforms single-sample distillation methods. We additionally train an SFT+RL 4B baseline under the same training budget; it shows only marginal gains, whereas our method achieves significant improvements. With a 4B student model, our approach delivers gains on VideoMME (+1.5%), Video-MMMU (+3.2%), and MathVerse (+3.6%).
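The core idea of multi-sample, quality-aware supervision can be sketched as follows. The paper's exact scoring function and weighting scheme are not given here, so `quality_weighted_targets`, its `temperature`, and the `min_quality` threshold are illustrative assumptions, not the authors' implementation: sample several teacher responses, drop those below a quality threshold, and softmax-weight the survivors so higher-quality responses contribute more to the distillation target.

```python
import math

def quality_weighted_targets(teacher_samples, temperature=0.5, min_quality=0.0):
    """Hypothetical sketch of quality-aware signal matching.

    teacher_samples: list of (response, quality_score) pairs, e.g. from
    repeatedly sampling a black-box teacher on the same video input.
    Returns (response, weight) pairs whose weights sum to 1, with
    low-quality responses filtered out entirely.
    """
    # Filter out responses below the quality threshold (noise suppression).
    kept = [(resp, q) for resp, q in teacher_samples if q >= min_quality]
    if not kept:
        return []
    # Softmax over quality scores: sharper weighting at low temperature.
    logits = [q / temperature for _, q in kept]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [(resp, e / total) for (resp, _), e in zip(kept, exps)]
```

A student would then be trained against the surviving responses with these weights as per-sample loss coefficients, rather than imitating one arbitrary teacher sample.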
Problem

Research questions and friction points this paper is trying to address.

knowledge distillation
video understanding
teacher variance
multimodal learning
black-box distillation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Sample Distillation
Teacher Sampling Variance
Quality-Aware Signal Matching
Adversarial Distillation
Video Understanding
Songlin Li
University of Electronic Science and Technology of China & Robotics Center, XPeng Motors
Xin Zhu
Robotics Center, XPeng Motors
Zechao Guan
Robotics Center, XPeng Motors
Peipeng Chen
Robotics Center, XPeng Motors
Jian Yao
Wuhan University
Computer Vision · AI · 3D · Robotics · SLAM