Distilling Cross-Modal Knowledge via Feature Disentanglement

📅 2025-11-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of knowledge transfer in cross-modal knowledge distillation caused by modality representation inconsistency, this paper proposes a frequency-domain disentangled distillation framework. The method applies the Fourier transform to decompose cross-modal features (e.g., vision–language) from teacher and student models into low-frequency components (semantically dominant) and high-frequency components (detail-dominant), then enforces strong alignment constraints on the former and weak alignment constraints on the latter. Additionally, a scale-consistency loss is introduced to mitigate distribution shift, and classifier sharing is adopted to enhance decision-level consistency. Evaluated on multiple vision–language benchmarks, the proposed approach significantly outperforms conventional and state-of-the-art cross-modal distillation methods, achieving both model compression and improved downstream task performance. To our knowledge, this is the first work to enable selective cross-modal knowledge transfer from a frequency-domain perspective.
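The core idea above — split each feature into low- and high-frequency parts via the Fourier transform, then align the low-frequency part strictly and the high-frequency part loosely — can be sketched as follows. This is not the paper's released code (see the linked GitHub repo for that); it is a minimal NumPy illustration in which `low_ratio` (fraction of frequency bins treated as low-frequency) and `high_weight` (down-weighting of the high-frequency loss) are assumed hyperparameters, and plain MSE stands in for whatever alignment losses the paper actually uses.

```python
import numpy as np

def frequency_split(feat, low_ratio=0.25):
    """Split a real 1-D feature vector into low- and high-frequency parts.

    `low_ratio` is an assumed hyperparameter: the fraction of FFT bins
    (per side) kept as the semantically dominant low-frequency component.
    """
    spec = np.fft.fft(feat)                 # complex spectrum of the feature
    k = max(1, int(len(feat) * low_ratio))  # number of low-frequency bins kept
    mask = np.zeros(len(feat))
    mask[:k] = 1.0
    if k > 1:
        mask[-(k - 1):] = 1.0               # mirror bins so the inverse FFT is real
    low = np.fft.ifft(spec * mask).real     # low-frequency (semantic) component
    high = feat - low                       # high-frequency (detail) residual
    return low, high

def distill_loss(teacher_feat, student_feat, low_ratio=0.25, high_weight=0.1):
    """Strong alignment on low frequencies, relaxed (down-weighted) on high."""
    t_low, t_high = frequency_split(teacher_feat, low_ratio)
    s_low, s_high = frequency_split(student_feat, low_ratio)
    strong = np.mean((t_low - s_low) ** 2)               # strict low-freq MSE
    weak = high_weight * np.mean((t_high - s_high) ** 2)  # relaxed high-freq MSE
    return strong + weak
```

By construction the two components sum back to the original feature, so the split only redistributes where the alignment pressure is applied, not what information is present.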

📝 Abstract
Knowledge distillation (KD) has proven highly effective for compressing large models and enhancing the performance of smaller ones. However, its effectiveness diminishes in cross-modal scenarios, such as vision-to-language distillation, where inconsistencies in representation across modalities make knowledge transfer difficult. To address this challenge, we propose frequency-decoupled cross-modal knowledge distillation, a method designed to decouple and balance knowledge transfer across modalities by leveraging frequency-domain features. We observe that low-frequency features exhibit high consistency across different modalities, whereas high-frequency features show extremely low cross-modal similarity. Accordingly, we apply distinct losses to these features: enforcing strong alignment in the low-frequency domain and relaxed alignment for high-frequency features. We also propose a scale-consistency loss to address distributional shifts between modalities, and employ a shared classifier to unify feature spaces. Extensive experiments across multiple benchmark datasets show our method substantially outperforms traditional KD and state-of-the-art cross-modal KD approaches. Code is available at https://github.com/Johumliu/FD-CMKD.
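Beyond the frequency-domain alignment, the abstract names two further components: a scale-consistency loss against cross-modal distribution shift, and a classifier shared between teacher and student features. A plausible minimal sketch of both is below; the exact formulations are not given in this summary, so the norm-matching form of the scale loss and the single linear classifier `(W, b)` are assumptions for illustration only.

```python
import numpy as np

def scale_consistency_loss(teacher_feats, student_feats):
    """One guessed form of a scale-consistency loss: penalize the gap
    between the mean feature norms of the two modalities, so neither
    feature distribution drifts to a different overall scale.
    (The paper's actual formulation may differ.)
    """
    t_scale = np.mean(np.linalg.norm(teacher_feats, axis=1))
    s_scale = np.mean(np.linalg.norm(student_feats, axis=1))
    return (t_scale - s_scale) ** 2

def shared_classifier_logits(feats, W, b):
    """A single (hypothetical) linear classifier applied to both teacher
    and student features, so their decisions land in one shared space."""
    return feats @ W + b
```

Because the same `(W, b)` scores both modalities, any residual feature-space mismatch shows up directly as a logit disagreement that the distillation objective can penalize.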
Problem

Research questions and friction points this paper is trying to address.

Addresses cross-modal knowledge transfer inconsistencies between vision and language
Proposes frequency-decoupled distillation with distinct alignment strategies for low- and high-frequency features
Solves representation divergence through frequency-domain feature disentanglement and a shared classifier
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frequency-decoupled cross-modal knowledge distillation method
Distinct losses for low- and high-frequency feature alignment
Shared classifier and scale-consistency loss to unify feature spaces