RubiCap: Rubric-Guided Reinforcement Learning for Dense Image Captioning

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses key challenges in dense image captioning—namely, the high cost of expert annotations, the limited diversity of existing synthetic approaches, and the lack of reliable reward mechanisms in reinforcement learning (RL). To overcome these limitations, the authors propose RubiCap, a novel framework that leverages large language models (LLMs) to generate fine-grained, sample-specific, multi-dimensional scoring rubrics for structured reward modeling in RL, replacing conventional coarse scalar rewards. Integrating committee sampling, LLM-based rubric writing, and automated scoring, RubiCap achieves state-of-the-art win rates on CapArena, outperforming supervised distillation, prior RL methods, human-expert annotations, and GPT-4V-augmented baselines. Furthermore, on CaptionQA it attains higher word efficiency with smaller models, and its captions significantly enhance downstream vision-language model pretraining.

📝 Abstract
Dense image captioning is critical for cross-modal alignment in vision-language pretraining and text-to-image generation, but scaling expert-quality annotations is prohibitively expensive. While synthetic captioning via strong vision-language models (VLMs) is a practical alternative, supervised distillation often yields limited output diversity and weak generalization. Reinforcement learning (RL) could overcome these limitations, but its successes have so far been concentrated in verifiable domains that rely on deterministic checkers -- a luxury not available in open-ended captioning. We address this bottleneck with RubiCap, a novel RL framework that derives fine-grained, sample-specific reward signals from LLM-written rubrics. RubiCap first assembles a diverse committee of candidate captions, then employs an LLM rubric writer to extract consensus strengths and diagnose deficiencies in the current policy. These insights are converted into explicit evaluation criteria, enabling an LLM judge to decompose holistic quality assessment and replace coarse scalar rewards with structured, multi-faceted evaluations. Across extensive benchmarks, RubiCap achieves the highest win rates on CapArena, outperforming supervised distillation, prior RL methods, human-expert annotations, and GPT-4V-augmented outputs. On CaptionQA, it demonstrates superior word efficiency: our 7B model matches Qwen2.5-VL-32B-Instruct, and our 3B model surpasses its 7B counterpart. Remarkably, using the compact RubiCap-3B as a captioner produces stronger pretrained VLMs than those trained on captions from proprietary models.
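The pipeline in the abstract—sample a committee of candidate captions, have an LLM write a sample-specific rubric, then have an LLM judge score each criterion and aggregate into a structured reward—can be sketched as below. This is a minimal illustration, not the paper's implementation: the `Criterion` type, the weights, and the weighted-mean aggregation are all hypothetical stand-ins for whatever RubiCap actually uses.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One rubric dimension a hypothetical LLM rubric writer might emit."""
    name: str
    weight: float

def rubric_reward(scores: dict[str, float], criteria: list[Criterion]) -> float:
    """Aggregate per-criterion judge scores (each in [0, 1]) into a scalar
    RL reward via a weighted mean. The aggregation scheme is an assumption;
    the paper does not specify it here."""
    total_weight = sum(c.weight for c in criteria)
    return sum(scores[c.name] * c.weight for c in criteria) / total_weight

# Hypothetical rubric written for one specific image, diagnosing the
# current policy's weaknesses (names and weights are illustrative).
criteria = [
    Criterion("object_coverage", 2.0),    # are all salient objects mentioned?
    Criterion("spatial_relations", 1.0),  # are positions and relations correct?
    Criterion("hallucination_free", 2.0), # no invented objects or attributes
]

# Per-criterion scores a hypothetical LLM judge assigns to one candidate caption.
scores = {"object_coverage": 0.8, "spatial_relations": 0.5, "hallucination_free": 1.0}

reward = rubric_reward(scores, criteria)  # (0.8*2 + 0.5*1 + 1.0*2) / 5 = 0.82
```

The point of the decomposition is that each criterion is individually checkable by the judge, so the policy receives a multi-faceted signal rather than a single opaque scalar.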
Problem

Research questions and friction points this paper is trying to address.

dense image captioning
reinforcement learning
reward signal
vision-language models
annotation scalability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning
Dense Image Captioning
LLM-based Rubrics
Structured Reward
Vision-Language Pretraining