L-CLIPScore: a Lightweight Embedding-based Captioning Metric for Evaluating and Training

📅 2025-07-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational cost of evaluating image caption quality and of training captioning models, this paper proposes a lightweight evaluation metric, L-CLIPScore, and its associated efficient dual-encoder model, L-CLIP. Methodologically: (1) CLIP's parameters are compressed via weight multiplexing and matrix decomposition; (2) a multi-modal Similarity Regulator (SR) loss is introduced during knowledge distillation to strengthen the embedding similarity of matched image-text pairs and suppress that of unmatched pairs, enhancing cross-modal alignment; (3) for training, L-CLIPScore is mixed with n-gram-based metrics to form a joint supervision signal. Experiments show that L-CLIPScore correlates strongly with human judgments (Pearson's r > 0.85) and runs 5.3× faster than CLIPScore. Used as training supervision, it significantly improves caption generation quality and consistently outperforms baselines, yielding a unified, high-fidelity, low-overhead solution for CLIP-level image-text evaluation and end-to-end training.
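As a rough illustration of what an embedding-based captioning metric of this kind computes, here is a minimal sketch following the published CLIPScore recipe, w · max(cos(v, t), 0) with w = 2.5; the exact L-CLIPScore formula is not reproduced in this summary, so the function name and scale here are assumptions:

```python
import torch
import torch.nn.functional as F

def embedding_caption_score(image_emb: torch.Tensor,
                            text_emb: torch.Tensor,
                            scale: float = 2.5) -> torch.Tensor:
    # Rescaled, clipped cosine similarity between L2-normalized image
    # and caption embeddings (the CLIPScore recipe; scale is assumed).
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    cos = (image_emb * text_emb).sum(dim=-1)
    return scale * torch.clamp(cos, min=0.0)
```

The claimed speedup presumably comes from producing these embeddings with the compressed L-CLIP encoders, not from changing the scoring formula itself.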

📝 Abstract
We propose a novel embedding-based captioning metric, termed L-CLIPScore, that can be used both to efficiently evaluate caption quality and to train captioning models. L-CLIPScore is calculated from a lightweight CLIP (L-CLIP), a dual-encoder architecture compressed and distilled from CLIP. To compress, we apply two techniques, weight multiplexing and matrix decomposition, to reduce the parameters of the encoders and of the word embedding matrix, respectively. To distill, we design a novel multi-modal Similarity Regulator (SR) loss to transfer more vision-language alignment knowledge. Specifically, the SR loss amplifies the multi-modal embedding similarity when the given image-text pair is matched and diminishes it when the pair is not matched. Compressed and distilled with this SR loss, our L-CLIP achieves multi-modal alignment ability comparable to the original CLIP while requiring fewer computational resources and less running time. We carry out exhaustive experiments to validate the efficiency and effectiveness of L-CLIPScore as a judge of caption quality. We also find that when L-CLIPScore is used as the supervisor to train a captioning model, it should be mixed with an n-gram-based metric, and we analyze why using L-CLIPScore alone causes training to fail.
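The abstract describes the SR loss only qualitatively (amplify matched-pair similarity, diminish unmatched-pair similarity during distillation). Below is a minimal sketch of one plausible margin-based reading, using the teacher CLIP's similarity as the reference; the margin and all names are assumptions, not the paper's formulation:

```python
import torch

def similarity_regulator_loss(student_sim: torch.Tensor,
                              teacher_sim: torch.Tensor,
                              matched: torch.Tensor,
                              margin: float = 0.1) -> torch.Tensor:
    # Matched pairs: push the student (L-CLIP) similarity above the
    # teacher's by `margin`; unmatched pairs: push it below by `margin`.
    sign = 2.0 * matched.float() - 1.0   # +1 for matched, -1 for unmatched
    target = teacher_sim + margin * sign
    shortfall = torch.where(matched, target - student_sim,
                            student_sim - target)
    return torch.clamp(shortfall, min=0.0).mean()
```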
Problem

Research questions and friction points this paper is trying to address.

Proposes L-CLIPScore for efficient caption evaluation and training (see the reward-mixing sketch after this list)
Compresses CLIP via weight multiplexing and matrix decomposition
Uses Similarity Regulator loss to enhance vision-language alignment
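Per the abstract, L-CLIPScore alone fails as a training supervisor and must be mixed with an n-gram-based metric. A minimal sketch of such a blended reward for reward-based caption training; the CIDEr pairing and the weight `alpha` are assumptions, not values from the paper:

```python
import torch

def mixed_caption_reward(l_clip_scores: torch.Tensor,
                         ngram_scores: torch.Tensor,
                         alpha: float = 0.5) -> torch.Tensor:
    # Blend the embedding-based metric with an n-gram metric such as
    # CIDEr; alpha is an assumed mixing weight, not reported here.
    return alpha * l_clip_scores + (1.0 - alpha) * ngram_scores
```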
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight CLIP via compression and distillation
Weight multiplexing and matrix decomposition compression techniques (see the factorization sketch after this list)
Multi-modal Similarity Regulator loss for alignment
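One common reading of "matrix decomposition" for shrinking a word embedding matrix is a low-rank factorization (as popularized by ALBERT); here is a minimal sketch under that assumption, with a hypothetical rank and class name, since the paper's exact scheme may differ:

```python
import torch
import torch.nn as nn

class FactorizedEmbedding(nn.Module):
    """Low-rank word embedding: a |V| x d matrix factored into
    |V| x r and r x d pieces, cutting parameters when r << d."""

    def __init__(self, vocab_size: int, dim: int, rank: int = 128):
        super().__init__()
        self.low = nn.Embedding(vocab_size, rank)   # |V| x r lookup
        self.up = nn.Linear(rank, dim, bias=False)  # r x d projection

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.up(self.low(token_ids))
```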
👥 Authors

Li Li
Key Laboratory of New Generation Artificial Intelligence Technology & Its Interdisciplinary Applications (Southeast University), Ministry of Education
Yingzhe Peng
Southeast University
Xu Yang
Key Laboratory of New Generation Artificial Intelligence Technology & Its Interdisciplinary Applications (Southeast University), Ministry of Education
Ruoxi Cheng
Key Laboratory of New Generation Artificial Intelligence Technology & Its Interdisciplinary Applications (Southeast University), Ministry of Education
Haiyang Xu
Alibaba Group
Ming Yan
Alibaba Group
Fei Huang
Alibaba Group