Uncertainty-Aware Knowledge Distillation for Multimodal Large Language Models

📅 2026-03-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of adaptively balancing data supervision and teacher guidance in knowledge distillation for multimodal large language models, particularly in the presence of noisy samples or uncertain teacher predictions. To this end, the authors propose Beta-weighted Knowledge Distillation (Beta-KD), which takes a Bayesian perspective and models teacher guidance as a Gibbs prior over student activations, allowing the student's reliance on the teacher to be modulated adaptively. Beta-KD is the first to introduce an uncertainty-aware adaptive weighting mechanism that supports arbitrary distillation objectives and their combinations, and it admits a closed-form solution for computational efficiency. Extensive experiments on multiple multimodal visual question answering (VQA) benchmarks show that Beta-KD significantly outperforms existing distillation approaches and effectively improves student model performance.
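The page does not reproduce the paper's closed-form Beta weights, but the core idea can be sketched as a per-sample interpolation between the data loss and the distillation loss, with the interpolation weight driven by teacher uncertainty. A minimal PyTorch-style sketch follows; the function name, the entropy-based weight `beta`, and the temperature `tau` are illustrative assumptions, not the paper's exact formulation:

```python
import math

import torch
import torch.nn.functional as F


def uncertainty_weighted_kd_loss(student_logits, teacher_logits, labels, tau=2.0):
    """Per-sample blend of data supervision and teacher guidance.

    NOTE: the entropy-based weight below is an illustrative stand-in for
    Beta-KD's closed-form Beta weighting, which this page does not reproduce.
    """
    # Data term: cross-entropy against ground-truth labels, one value per sample.
    ce = F.cross_entropy(student_logits, labels, reduction="none")

    # Teacher term: temperature-scaled KL divergence to the teacher, per sample.
    t_log_probs = F.log_softmax(teacher_logits / tau, dim=-1)
    s_log_probs = F.log_softmax(student_logits / tau, dim=-1)
    kd = F.kl_div(s_log_probs, t_log_probs, log_target=True,
                  reduction="none").sum(dim=-1) * tau ** 2

    # Uncertainty proxy: teacher entropy normalized to [0, 1]. An uncertain
    # teacher (high entropy) gets a small weight, shifting the student back
    # toward the data term; a confident teacher gets a large weight.
    t_probs = t_log_probs.exp()
    entropy = -(t_probs * t_log_probs).sum(dim=-1)
    beta = 1.0 - entropy / math.log(student_logits.size(-1))

    return ((1.0 - beta) * ce + beta * kd).mean()
```

With `student_logits` and `teacher_logits` of shape `(batch, num_classes)` and integer `labels`, the returned scalar is backpropagated as usual; in the paper's framework, the `kd` term could in principle be replaced by any distillation objective or combination of objectives.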

📝 Abstract
Knowledge distillation establishes a learning paradigm that leverages both data supervision and teacher guidance. However, determining the optimal balance between learning from data and learning from the teacher is challenging, as some samples may be noisy while others are subject to teacher uncertainty. This motivates the need for adaptively balancing data and teacher supervision. We propose Beta-weighted Knowledge Distillation (Beta-KD), an uncertainty-aware distillation framework that adaptively modulates how much the student relies on teacher guidance. Specifically, we formulate teacher–student learning from a unified Bayesian perspective and interpret teacher supervision as a Gibbs prior over student activations. This yields a closed-form, uncertainty-aware weighting mechanism and supports arbitrary distillation objectives and their combinations. Extensive experiments on multimodal VQA benchmarks demonstrate that distilling student Vision-Language Models from a large teacher VLM consistently improves performance. The results show that Beta-KD outperforms existing knowledge distillation methods. The code is available at https://github.com/Jingchensun/beta-kd.
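Reading the abstract's Bayesian framing literally, one plausible formalization (all notation below is assumed for illustration, not taken from the paper) treats the teacher representation z_T as defining a Gibbs prior over the student activation z, so that the negative log posterior splits into a data term plus a weighted teacher term:

```latex
% Assumed notation: z is a student activation, z_T the teacher's counterpart,
% d(.,.) a distillation divergence, and beta the uncertainty-dependent weight.
\[
  p(z \mid z_T) \;\propto\; \exp\!\bigl(-\beta\, d(z, z_T)\bigr)
\]
\[
  -\log p(z \mid x, y, z_T)
  \;=\; \underbrace{-\log p(y \mid x, z)}_{\text{data supervision}}
  \;+\; \beta\, \underbrace{d(z, z_T)}_{\text{teacher guidance}}
  \;+\; \text{const.}
\]
```

Under this reading, any choice of d(·,·) (e.g., a KL divergence) yields a valid prior energy, which is consistent with the abstract's claim that arbitrary distillation objectives and their combinations are supported.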
Problem

Research questions and friction points this paper is trying to address.

knowledge distillation
multimodal large language models
teacher uncertainty
data supervision
uncertainty-aware
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uncertainty-Aware
Knowledge Distillation
Bayesian Perspective
Adaptive Weighting
Multimodal Large Language Models
Jingchen Sun
NEC Laboratories America, Inc., USA; University at Buffalo, SUNY
Shaobo Han
NEC Labs America, Duke University
Machine Learning · Artificial Intelligence · Bayesian Statistics · Signal Processing
Deep Patel
Student, Indian Institute of Information Technology Raichur
Computer Science · Natural Language Processing · Computer Vision
Wataru Kohno
NEC Laboratories America, Inc., USA
Can Jin
Rutgers University
Changyou Chen
University at Buffalo, SUNY