🤖 AI Summary
Lightweight image captioning models that use retrieved data typically treat the retrieved text as a textual prompt alone, so the visual semantics carried by that text are never reflected in the CLIP visual embedding space, limiting cross-modal alignment. To address this, the paper proposes ViPCap, a retrieval-text-based visual prompt for lightweight captioning: retrieved text is mapped into the CLIP embedding space, multiple randomized Gaussian distributions are generated around it, and sampling from these distributions retrieves semantic features that carry image information. These features are integrated with the image and used as a visual prompt. Evaluated on COCO, Flickr30k, and NoCaps, ViPCap consistently outperforms prior lightweight captioning models in both efficiency and effectiveness, and supports plug-and-play deployment.
📝 Abstract
Recent lightweight image captioning models that use retrieved data focus mainly on text prompts. However, prior works use the retrieved text only as a textual prompt, while visual information comes solely from the CLIP visual embedding. As a result, the image descriptions contained in the prompt are not sufficiently reflected in the visual embedding space. To tackle this issue, we propose ViPCap, a novel retrieval-text-based visual prompt for lightweight image captioning. ViPCap leverages the retrieved text, together with image information, as a visual prompt to enhance the model's ability to capture relevant visual information. By mapping text prompts into the CLIP space and generating multiple randomized Gaussian distributions, our method samples from these randomly augmented distributions to effectively retrieve semantic features that contain image information. The retrieved features are integrated into the image and used as the visual prompt, yielding performance improvements on datasets such as COCO, Flickr30k, and NoCaps. Experimental results demonstrate that ViPCap significantly outperforms prior lightweight captioning models in efficiency and effectiveness, showing its potential as a plug-and-play solution.
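The core mechanism described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the CLIP text embedding of a retrieved caption is treated as the center of a Gaussian in the embedding space, and that visual prompt tokens are drawn by sampling noise around it. The function name, noise scale, and sample count are all hypothetical choices for illustration; a toy vector stands in for a real CLIP embedding.

```python
import numpy as np

def sample_visual_prompts(text_emb, num_samples=4, sigma=0.1, seed=0):
    """Illustrative sketch: treat the CLIP text embedding of retrieved
    text as the mean of a randomized Gaussian distribution, then draw
    several noisy samples to serve as visual prompt tokens."""
    rng = np.random.default_rng(seed)
    # Perturb the text embedding with Gaussian noise, exploring
    # randomly augmented distributions around it.
    noise = rng.normal(0.0, sigma, size=(num_samples, text_emb.shape[-1]))
    prompts = text_emb[None, :] + noise
    # L2-normalize, since CLIP embeddings live on the unit hypersphere.
    return prompts / np.linalg.norm(prompts, axis=-1, keepdims=True)

# Toy stand-in for a 512-d CLIP text embedding of a retrieved caption.
text_emb = np.ones(512) / np.sqrt(512)
prompts = sample_visual_prompts(text_emb, num_samples=4)
print(prompts.shape)  # (4, 512)
```

In the paper's pipeline, such sampled features would then be fused with the image features before decoding; here the sketch stops at prompt generation.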