Cross-modal Identity Mapping: Minimizing Information Loss in Modality Conversion via Reinforcement Learning

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses uncontrolled information loss in large vision-language models during image captioning, typically caused by omission or misrepresentation of critical visual content. To mitigate this, the authors propose the Cross-modal Identity Mapping (CIM) framework, which requires no additional annotations: it introduces a reinforcement learning objective grounded in image similarity from text-image retrieval, used as an unsupervised proxy for information loss. The framework enforces two consistency constraints: Gallery Representation Consistency and Query-Gallery Image Relevance. Evaluated on the COCO-LN500 benchmark, CIM improves the relational reasoning capability of Qwen2.5-VL-7B by 20% and outperforms supervised fine-tuning.

📝 Abstract
Large Vision-Language Models (LVLMs) often omit or misrepresent critical visual content in generated image captions. Minimizing such information loss forces LVLMs to attend to image details and generate precise descriptions. However, measuring information loss during modality conversion is inherently challenging due to the modal gap between visual content and text output. In this paper, we argue that the quality of an image caption is positively correlated with the similarity between images retrieved via text search using that caption. Based on this insight, we propose Cross-modal Identity Mapping (CIM), a reinforcement learning framework that enhances image captioning without requiring additional annotations. Specifically, the method quantitatively evaluates information loss from two perspectives: Gallery Representation Consistency and Query-Gallery Image Relevance. Supervised under these metrics, the LVLM minimizes information loss and aims to achieve an identity mapping from images to captions. Experimental results demonstrate the superior performance of our method in image captioning, even compared with Supervised Fine-Tuning. In particular, on the COCO-LN500 benchmark, CIM achieves a 20% improvement in relation reasoning on Qwen2.5-VL-7B. The code will be released when the paper is accepted.
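The abstract's two signals can be read as statistics over retrieval embeddings: if a caption preserves the image's information, the images it retrieves should be mutually similar (Gallery Representation Consistency) and similar to the original query image (Query-Gallery Image Relevance). The sketch below illustrates one plausible cosine-similarity formulation; the embedding model, the top-k retrieval step, the exact metric definitions, and the `alpha` weighting are all assumptions, not the paper's implementation.

```python
import numpy as np

def cim_reward(query_emb, gallery_embs, alpha=0.5):
    """Hypothetical reward combining the two CIM-style signals.

    query_emb:    (d,) embedding of the original (query) image.
    gallery_embs: (k, d) embeddings of images retrieved by
                  text-to-image search using the generated caption.
    alpha:        assumed weighting between the two terms.
    """
    # L2-normalize so dot products become cosine similarities.
    q = query_emb / np.linalg.norm(query_emb)
    G = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)

    # Gallery Representation Consistency (assumed form): mean pairwise
    # cosine similarity among retrieved images, excluding self-pairs.
    sims = G @ G.T
    k = G.shape[0]
    consistency = (sims.sum() - np.trace(sims)) / (k * (k - 1))

    # Query-Gallery Image Relevance (assumed form): mean cosine
    # similarity between the query image and the retrieved gallery.
    relevance = float((G @ q).mean())

    return alpha * consistency + (1 - alpha) * relevance
```

A reward like this could then serve as the scalar feedback in a policy-gradient loop over sampled captions: a caption that retrieves a tight, query-relevant gallery scores near 1, while one that retrieves scattered or unrelated images scores lower.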
Problem

Research questions and friction points this paper is trying to address.

information loss
modality conversion
image captioning
vision-language models
cross-modal mapping
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-modal Identity Mapping
Reinforcement Learning
Information Loss Minimization
Image Captioning
Vision-Language Models