🤖 AI Summary
During vision-language model (VLM) compression, multilingual performance degradation intensifies and becomes markedly imbalanced across languages. This work systematically investigates knowledge distillation (KD) as an adaptation mechanism for multilingual VLM compression. Using CLIP and SigLIP architectures, we design and comparatively evaluate five KD strategies through controlled experiments on in-domain cross-lingual image–text retrieval and out-of-domain multilingual visual question answering (VQA). We reveal a design-sensitive trade-off across KD configurations between cross-lingual representation consistency and cross-task stability. Notably, certain strategies—e.g., intermediate-layer feature distillation with language-aware weighting—maintain or even improve multilingual retrieval performance (average +1.2% mAP) under 50% parameter reduction, yet induce substantial instability in multilingual VQA (±4.8% fluctuation). Our study establishes a reproducible methodology and empirical benchmark for efficient, robust multilingual VLM compression.
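The language-aware feature-distillation strategy mentioned above can be sketched roughly as a per-language-weighted feature-matching loss. The sketch below is an illustrative assumption, not the paper's exact formulation: the loss form (MSE on intermediate features) and the weighting scheme (fixed per-language weights, e.g. inversely related to a language's current retrieval score) are hypothetical choices for exposition.

```python
# Hypothetical sketch of intermediate-layer feature distillation with
# language-aware weighting: the student is trained to match the teacher's
# intermediate features, with underperforming languages upweighted.
# The weighting scheme and loss form here are illustrative assumptions.

def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def language_aware_kd_loss(teacher_feats, student_feats, langs, lang_weights):
    """Weighted average of per-sample feature-matching losses.

    teacher_feats / student_feats: intermediate-layer feature vectors,
    one per sample; langs: language tag per sample; lang_weights: dict
    mapping language -> weight (e.g. larger for weaker languages, so
    they contribute more to the distillation objective).
    """
    total, weight_sum = 0.0, 0.0
    for t, s, lang in zip(teacher_feats, student_feats, langs):
        w = lang_weights.get(lang, 1.0)  # default weight 1.0 for unseen languages
        total += w * mse(t, s)
        weight_sum += w
    return total / weight_sum

# Toy batch of two samples: German is upweighted relative to English.
loss = language_aware_kd_loss(
    teacher_feats=[[1.0, 0.0], [0.5, 0.5]],
    student_feats=[[0.8, 0.1], [0.5, 0.3]],
    langs=["en", "de"],
    lang_weights={"en": 1.0, "de": 2.0},
)
```

In a real training loop the weights could themselves be adapted from per-language validation metrics, so that languages that degrade most under compression receive the strongest distillation signal.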
📝 Abstract
Vision-language models (VLMs) exhibit uneven performance across languages, a problem that is often exacerbated when model size is reduced. While knowledge distillation (KD) has shown promising results in transferring knowledge from larger to smaller VLMs, applying KD in multilingual settings remains underexplored. This paper presents a controlled empirical study of KD behavior under model compression, isolating its effects on cross-lingual representation consistency and downstream performance stability. We study five distillation formulations across CLIP and SigLIP2, and evaluate them on in-domain retrieval and out-of-domain visual QA. We find that some configurations preserve or even improve multilingual retrieval robustness despite halving model size, while others fail to maintain cross-task stability, exposing design-sensitive trade-offs that aggregate accuracy alone does not reveal.