🤖 AI Summary
Existing zero-shot image captioning methods rely on global image representations, limiting their ability to generate fine-grained descriptions for arbitrary regions such as individual objects, non-contiguous regions, or the entire image. This work introduces the first unified patch-based zero-shot dense captioning framework, treating image patches as fundamental semantic units. Leveraging pretrained vision models (e.g., DINO), it extracts dense visual features aligned with textual semantics and aggregates local patch-level representations to enable scalable, region-level caption generation without any region-level annotation supervision. The framework supports flexible description of arbitrarily shaped regions, including disjoint ones, and formally defines and addresses the novel "trace captioning" task: generating sequential descriptions that trace user-specified spatial paths. Extensive experiments demonstrate state-of-the-art performance across three zero-shot dense captioning benchmarks: dense region captioning, region-set captioning, and trace captioning.
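The core aggregation step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the mean-pooling choice, and the feature dimensions (a 14×14 patch grid with 768-dim features, roughly ViT-B/16-like) are assumptions, and the decoding step that would turn the pooled feature into text is omitted.

```python
import numpy as np

def aggregate_region_features(patch_features: np.ndarray,
                              region_mask: np.ndarray) -> np.ndarray:
    """Pool patch-level features over an arbitrary (possibly disjoint) region.

    patch_features: (N, D) array of per-patch features from a dense backbone.
    region_mask:    (N,) boolean array selecting the patches in the region.
    Returns a single (D,) region descriptor (here: a simple mean, as an
    illustrative aggregation; the actual framework may aggregate differently).
    """
    selected = patch_features[region_mask]
    if selected.size == 0:
        raise ValueError("region mask selects no patches")
    return selected.mean(axis=0)

# Toy stand-in for dense backbone output: 196 patches, 768-dim features.
rng = np.random.default_rng(0)
feats = rng.standard_normal((196, 768))

# A disjoint region: two separate patch clusters on the 14x14 grid.
mask = np.zeros(196, dtype=bool)
mask[[0, 1, 14, 15]] = True        # top-left cluster
mask[[180, 181, 194, 195]] = True  # bottom-right cluster

region_feat = aggregate_region_features(feats, mask)
assert region_feat.shape == (768,)
```

Because the region is just a boolean mask over patches, the same call covers a single patch, a disjoint set of objects, or the full image (an all-ones mask), which is the flexibility the summary highlights.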
📝 Abstract
Zero-shot captioners are recently proposed models that use common-space vision-language representations to caption images without relying on paired image-text data. To caption an image, they textually decode a text-aligned image feature, but they limit their scope to global representations and whole-image captions. We present Patch-ioner, a unified framework for zero-shot captioning that shifts from an image-centric to a patch-centric paradigm, enabling the captioning of arbitrary regions without the need for region-level supervision. Instead of relying on global image representations, we treat individual patches as atomic captioning units and aggregate them to describe arbitrary regions, from single patches to non-contiguous areas and entire images. We analyze the key ingredients that enable current latent captioners to work in our proposed framework. Experiments demonstrate that backbones producing meaningful, dense visual features, such as DINO, are key to achieving state-of-the-art performance on multiple region-based captioning tasks. Compared to other baselines and state-of-the-art competitors, our models achieve better performance on zero-shot dense, region-set, and a newly introduced trace captioning task, highlighting the effectiveness of patch-wise semantic representations for scalable caption generation. Project page: https://paciosoft.com/Patch-ioner/