Multimodal Dataset Distillation via Phased Teacher Models

📅 2026-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a limitation of existing multimodal dataset distillation approaches: they struggle to capture the complex, dynamically evolving knowledge of teacher models in later training stages, which constrains student performance and degrades the quality of the distilled data. To overcome this, the authors propose a phased teacher modeling framework coupled with a shortcut trajectory construction strategy. A stage-aware mechanism captures the temporal evolution of teacher knowledge across distinct training phases, mitigating optimization instability and inter-stage knowledge gaps. The approach improves the stability, representational capacity, and efficiency of multimodal image-text distillation, achieving state-of-the-art results on the Flickr30k and COCO benchmarks, with up to a 13.5% absolute improvement on Flickr30k (9.53% on average), while also reducing storage overhead.
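The summary frames PTM-ST as a trajectory-matching-style distillation method organized around teacher training phases. One plausible formalization, assuming the normalized matching objective common in the trajectory-matching literature (this equation is an assumption, not quoted from the paper): train a student from a teacher checkpoint on the synthetic set for a few steps, then match a later checkpoint drawn from the same phase.

```latex
% Assumed phase-restricted matching objective (not quoted from the paper):
% \hat{\theta}_{t+N} is a student trained for N steps on \mathcal{D}_{\mathrm{syn}}
% starting from teacher checkpoint \theta^{*}_{t}; both endpoints lie in phase k.
\mathcal{L}(\mathcal{D}_{\mathrm{syn}}) =
  \frac{\lVert \hat{\theta}_{t+N} - \theta^{*}_{t+M} \rVert_2^2}
       {\lVert \theta^{*}_{t} - \theta^{*}_{t+M} \rVert_2^2},
\qquad t,\; t+M \in \text{phase } k .
```

Under this reading, the "shortcut trajectory" would replace the noisy sequence of intermediate checkpoints within a phase with a direct start-to-end target, which is also consistent with the claimed reduction in storage overhead: fewer teacher checkpoints need to be kept.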

📝 Abstract
Multimodal dataset distillation aims to construct compact synthetic datasets that enable efficient compression and knowledge transfer from large-scale image-text data. However, existing approaches often fail to capture the complex, dynamically evolving knowledge embedded in the later training stages of teacher models. This limitation leads to degraded student performance and compromises the quality of the distilled data. To address critical challenges such as pronounced cross-stage performance gaps and unstable teacher trajectories, we propose Phased Teacher Model with Shortcut Trajectory (PTM-ST) -- a novel phased distillation framework. PTM-ST leverages stage-aware teacher modeling and a shortcut-based trajectory construction strategy to accurately fit the teacher's learning dynamics across distinct training phases. This enhances both the stability and expressiveness of the distillation process. Through theoretical analysis and comprehensive experiments, we show that PTM-ST significantly mitigates optimization oscillations and inter-phase knowledge gaps, while also reducing storage overhead. Our method consistently surpasses state-of-the-art baselines on Flickr30k and COCO, achieving up to 13.5% absolute improvement and an average gain of 9.53% on Flickr30k. Code: https://github.com/Previsior/PTM-ST.
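To make the mechanics concrete, below is a minimal, self-contained sketch of phase-wise trajectory matching for image-text data, under the assumptions stated above. It is an illustration of the general technique, not the authors' implementation: `ToyCLIP`, `inner_train`, and `phase_matching_loss` are hypothetical names, and the shortcut is modeled simply as matching the phase-end checkpoint directly.

```python
# Hypothetical sketch of phase-wise trajectory matching for multimodal
# dataset distillation (illustrative only; not the PTM-ST reference code).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

class ToyCLIP(nn.Module):
    """Tiny two-tower stand-in for an image-text encoder pair."""
    def __init__(self, img_dim=512, txt_dim=256, emb_dim=128):
        super().__init__()
        self.img_enc = nn.Linear(img_dim, emb_dim)
        self.txt_enc = nn.Linear(txt_dim, emb_dim)

    def forward(self, imgs, txts):
        return self.img_enc(imgs), self.txt_enc(txts)

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a paired image-text batch (CLIP-style)."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

def inner_train(model, params, syn_imgs, syn_txts, steps, lr):
    """Differentiable SGD on the synthetic set: gradients flow back
    through the unrolled updates into syn_imgs / syn_txts."""
    for _ in range(steps):
        img_e, txt_e = functional_call(model, params, (syn_imgs, syn_txts))
        loss = contrastive_loss(img_e, txt_e)
        grads = torch.autograd.grad(loss, tuple(params.values()),
                                    create_graph=True)
        params = {k: p - lr * g
                  for (k, p), g in zip(params.items(), grads)}
    return params

def phase_matching_loss(model, ckpt_start, ckpt_end, syn_imgs, syn_txts,
                        steps=5, lr=0.1):
    """Start the student at a phase-start teacher checkpoint, train it on
    the synthetic data, and match the phase-end checkpoint (the 'shortcut'
    target), normalized by the teacher's own parameter displacement."""
    params = {k: v.clone().requires_grad_(True) for k, v in ckpt_start.items()}
    trained = inner_train(model, params, syn_imgs, syn_txts, steps, lr)
    num = sum(((trained[k] - ckpt_end[k]) ** 2).sum() for k in trained)
    den = sum(((ckpt_start[k] - ckpt_end[k]) ** 2).sum() for k in trained)
    return num / (den + 1e-12)

# --- toy usage: three phase-boundary checkpoints (random stand-ins) ---
model = ToyCLIP()
base = {k: v.detach().clone() for k, v in model.state_dict().items()}
phase_ckpts = [{k: v + 0.05 * torch.randn_like(v) for k, v in base.items()}
               for _ in range(3)]  # pretend these were saved per training phase
syn_imgs = torch.randn(32, 512, requires_grad=True)  # learnable synthetic image features
syn_txts = torch.randn(32, 256, requires_grad=True)  # learnable synthetic text embeddings
opt = torch.optim.Adam([syn_imgs, syn_txts], lr=1e-2)

for step in range(10):
    phase = step % (len(phase_ckpts) - 1)  # stage-aware: sample within one phase
    opt.zero_grad()
    loss = phase_matching_loss(model, phase_ckpts[phase],
                               phase_ckpts[phase + 1], syn_imgs, syn_txts)
    loss.backward()
    opt.step()
```

The phase restriction is what distinguishes this sketch from plain trajectory matching: the start and target checkpoints always come from the same phase, so the student never has to bridge an inter-phase knowledge gap in a single matching step.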
Problem

Research questions and friction points this paper is trying to address.

multimodal dataset distillation
teacher model dynamics
cross-stage performance gap
unstable teacher trajectories
knowledge transfer
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal dataset distillation
phased teacher model
shortcut trajectory
knowledge transfer
stage-aware modeling
Authors
Shengbin Guo
Harbin Institute of Technology, Shenzhen
Hang Zhao
Harbin Institute of Technology, Shenzhen
Senqiao Yang
The Chinese University of Hong Kong
Chenyang Jiang
Harbin Institute of Technology, Shenzhen
Yuhang Cheng
Harbin Institute of Technology, Shenzhen
Xiangru Peng
Harbin Institute of Technology, Shenzhen
Rui Shao
Professor, Harbin Institute of Technology (Shenzhen)
Computer Vision · Multimodal LLM · Embodied AI
Zhuotao Tian
Professor, Harbin Institute of Technology (Shenzhen)
Vision-language Model · Multi-modal Perception · Computer Vision