SuperCap: Multi-resolution Superpixel-based Image Captioning

📅 2025-03-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the over-reliance on object detectors, limited generalizability, and coarse-grained semantic expression in image captioning, this paper proposes a detector-free multi-resolution superpixel vision-language modeling framework. Methodologically, it introduces a novel superpixel-guided multi-scale input mechanism, integrated with adaptive cross-scale attention fusion and pre-trained vision-language models (e.g., BLIP-2, Qwen-VL), enabling joint local structural perception and global open-vocabulary semantic understanding. The core contribution lies in replacing detection bounding boxes with superpixels as semantic units—eliminating explicit detector dependency while supporting hierarchical, fine-grained image parsing and natural language generation. On the COCO Karpathy test split, the framework achieves a CIDEr score of 136.9, substantially outperforming existing detector-free approaches. Ablation studies systematically validate the effectiveness of each component.

Technology Category

Application Category

📝 Abstract
It has been a longstanding goal within image captioning to move beyond a dependence on object detection. We investigate using superpixels coupled with Vision Language Models (VLMs) to bridge the gap between detector-based captioning architectures and those that solely pretrain on large datasets. Our novel superpixel approach ensures that the model receives object-like features whilst the use of VLMs provides our model with open set object understanding. Furthermore, we extend our architecture to make use of multi-resolution inputs, allowing our model to view images in different levels of detail, and use an attention mechanism to determine which parts are most relevant to the caption. We demonstrate our model's performance with multiple VLMs and through a range of ablations detailing the impact of different architectural choices. Our full model achieves a competitive CIDEr score of $136.9$ on the COCO Karpathy split.
Problem

Research questions and friction points this paper is trying to address.

Move beyond object detection in image captioning
Use superpixels and VLMs for open set object understanding
Incorporate multi-resolution inputs for detailed image analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Superpixels replace object detection in captioning
Multi-resolution inputs enhance image detail analysis
Attention mechanism focuses on relevant caption parts
🔎 Similar Papers
No similar papers found.
Henry Senior
Henry Senior
PhD Student
computer visiongraph neural networks
L
Luca Rossi
The Hong Kong Polytechnic University, Hong Kong
G
Gregory Slabaugh
Queen Mary University of London, London, UK
Shanxin Yuan
Shanxin Yuan
Lecturer, Queen Mary University of London
Low-level vision3D visiondigital humanneural rendering