🤖 AI Summary
This work addresses the limitations of conventional black-box distillation in video understanding, which relies on a single teacher response per input and suffers from high variance and format inconsistency, yielding unreliable supervision signals. To overcome this, the authors propose R-MSD, a framework that, for the first time in video understanding, introduces a reliable multi-sample distillation mechanism. R-MSD constructs a task-adaptive teacher ensemble to model teacher sampling variance and combines quality-aware signal matching with an adversarial distillation objective, enabling supervision signals to adapt dynamically to both closed-ended and open-ended reasoning tasks. The approach suppresses teacher noise and improves generalization, outperforming single-sample distillation and SFT+RL baselines by 1.5% on VideoMME, 3.2% on Video-MMMU, and 3.6% on MathVerse.
📝 Abstract
Traditional black-box distillation for Large Vision-Language Models (LVLMs) typically relies on a single teacher response per input, which often yields high-variance answers and format inconsistencies in multimodal or temporal scenarios. To mitigate this unreliable supervision, we propose R-MSD (Reliable Multi-Sample Distillation), a framework that explicitly models teacher sampling variance to stabilize distillation. Rather than relying on a single teacher response, our approach draws on a task-adaptive teacher pool to provide robust supervision tailored to both closed-ended and open-ended reasoning. By integrating quality-aware signal matching with an adversarial distillation objective, R-MSD effectively filters teacher noise while maximizing knowledge transfer. Extensive evaluations across comprehensive video understanding benchmarks show that R-MSD consistently outperforms single-sample distillation methods. We additionally train an SFT+RL baseline on the same 4B model under the same training budget; it yields only marginal gains, whereas our method achieves significant improvements. With a 4B student model, R-MSD delivers gains of +1.5% on VideoMME, +3.2% on Video-MMMU, and +3.6% on MathVerse.
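To make the multi-sample idea concrete, here is a minimal, hypothetical sketch of sampling several responses from a black-box teacher and selecting a supervision target by a quality score. The `mock_teacher` and `quality_score` functions below are illustrative placeholders invented for this sketch; they are not the paper's actual task-adaptive teacher pool, quality-aware matching, or adversarial objective.

```python
import random
import statistics

def sample_teacher_responses(teacher, prompt, k=8):
    """Draw k independent samples from the black-box teacher for one input."""
    return [teacher(prompt) for _ in range(k)]

def quality_score(response, reference_format):
    """Toy quality proxy: reward format-consistent answers (hypothetical)."""
    return 1.0 if response.endswith(reference_format) else 0.2

def select_supervision(responses, reference_format):
    """Quality-aware selection: score every sample, keep the best as the
    distillation target, and report the mean score as a reliability estimate."""
    scored = [(quality_score(r, reference_format), r) for r in responses]
    best_score, target = max(scored)
    reliability = statistics.mean(s for s, _ in scored)
    return target, best_score, reliability

random.seed(0)
def mock_teacher(prompt):
    # Simulated black-box teacher: 70% chance of a format-consistent answer.
    return "Answer: B" if random.random() < 0.7 else "I think it is B"

responses = sample_teacher_responses(mock_teacher, "Which event occurs first?")
target, score, reliability = select_supervision(responses, "Answer: B")
print(target, score, reliability)  # → Answer: B 1.0 0.7
```

With a single sample, the student would be trained on whatever formatting the teacher happened to emit; with k samples and a quality filter, noisy or malformed responses are down-weighted before supervision.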