🤖 AI Summary
To address pervasive multimodal hallucinations and the challenges of factuality evaluation in Vision-Language Models (VLMs), this paper proposes HKD4VLM, a progressive hybrid knowledge distillation framework. Methodologically, it introduces a two-stage architecture: (i) pyramid-like progressive online distillation and (ii) ternary-coupled refinement distillation, combined with mapping shift-enhanced inference and diversity-driven multi-source data augmentation. It further incorporates multi-granularity alignment and a ternary loss design to jointly optimize semantic fidelity, visual grounding, and logical consistency. HKD4VLM achieves state-of-the-art (SOTA) performance on both tracks of the Responsible AI Challenge, and ablation studies confirm the efficacy of each component. Notably, the lightweight student model significantly outperforms fine-tuned large-model baselines, attaining a superior trade-off between accuracy and inference efficiency.
📝 Abstract
Driven by rapid progress in vision-language models (VLMs), the responsible behavior of large-scale multimodal models has become a prominent research area, with particular focus on hallucination detection and factuality checking. In this paper, we present our solution for the two tracks of the Responsible AI Challenge. Evidence from the general domain shows that a smaller distilled VLM can often outperform a larger VLM tuned directly on downstream tasks, while achieving higher efficiency. We therefore tackle both tasks from the perspective of knowledge distillation and propose a progressive hybrid knowledge distillation framework termed HKD4VLM. Specifically, the framework decomposes into Pyramid-like Progressive Online Distillation and Ternary-Coupled Refinement Distillation, moving hierarchically from coarse-grained knowledge alignment to fine-grained refinement. We further introduce mapping shift-enhanced inference and diverse augmentation strategies to improve model performance and robustness. Extensive experimental results demonstrate the effectiveness of HKD4VLM, and ablation studies provide insights into the critical design choices driving its performance gains.
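For readers unfamiliar with the distillation framing, the sketch below illustrates the *classic* hybrid knowledge-distillation objective (a temperature-scaled KL term against the teacher's soft labels plus a hard cross-entropy term on ground truth), which the paper's more elaborate ternary loss builds upon. This is a generic textbook recipe, not the authors' actual loss; all function names and hyperparameter values here are illustrative assumptions.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(l / T) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def hybrid_kd_loss(student_logits, teacher_logits, target_idx, T=2.0, alpha=0.5):
    """Generic hybrid KD loss (illustrative, not the paper's ternary loss):
    alpha * T^2 * KL(teacher || student)  -- soft-label matching
    + (1 - alpha) * CE(student, target)   -- hard-label supervision
    The T^2 factor keeps gradient magnitudes comparable across temperatures."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_teacher, p_student))
    ce = -math.log(softmax(student_logits)[target_idx])
    return alpha * (T ** 2) * kl + (1 - alpha) * ce
```

When the student's logits match the teacher's exactly, the KL term vanishes and only the hard-label cross-entropy remains, which is why distillation can be viewed as a regularizer that pulls the student toward the teacher's output distribution.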