🤖 AI Summary
In non-object-centric vision domains—such as medical imaging and remote sensing—text annotations are typically itemized: multiple independent, semantically disjoint phrases describe distinct image regions. To address this, we propose ItemizedCLIP, the first framework to jointly optimize textual item independence and representational completeness. Methodologically, we introduce a text-item-conditioned cross-attention mechanism that achieves semantic decoupling and region-specific alignment, and we design a multi-objective joint loss unifying item-independence constraints, representation-completeness constraints, and cross-modal contrastive alignment. Evaluated on brain MRI, cranial/thoracic CT, remote sensing, and synthetic datasets, ItemizedCLIP significantly improves zero-shot classification performance. It produces semantically anchored, item-discriminative, complete, and visually interpretable representations—establishing a new paradigm for fine-grained, explainable visual representation learning.
📝 Abstract
Training vision models with language supervision enables general and transferable representations. However, many visual domains, especially non-object-centric domains such as medical imaging and remote sensing, contain itemized text annotations: multiple text items describing distinct and semantically independent findings within a single image. Such supervision differs from standard multi-caption supervision, where captions are redundant or highly overlapping. Here, we introduce ItemizedCLIP, a framework for learning complete and explainable visual representations from itemized text supervision. ItemizedCLIP employs a cross-attention module to produce text item-conditioned visual embeddings and a set of tailored objectives that jointly enforce item independence (distinct regions for distinct items) and representation completeness (coverage of all items). Across four domains with naturally itemized text supervision (brain MRI, head CT, chest CT, remote sensing) and one additional synthetically itemized dataset, ItemizedCLIP achieves substantial improvements in zero-shot performance and fine-grained interpretability over baselines. The resulting ItemizedCLIP representations are semantically grounded, item-differentiable, complete, and visually interpretable. Our code is available at https://github.com/MLNeurosurg/ItemizedCLIP.
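The core idea—using each text item as an attention query over image patches so that distinct items pool features from distinct regions—can be sketched in a few lines. This is an illustrative toy example, not the authors' implementation: the function name, the dot-product attention form, and the toy embeddings are all assumptions for exposition.

```python
import numpy as np

def item_conditioned_embedding(patches, item):
    """Cross-attention pooling sketch: a text item acts as the query
    over image patch embeddings (illustrative, not the paper's code).

    patches: (N, d) array of patch embeddings
    item:    (d,) text-item embedding
    Returns the item-conditioned visual embedding (d,) and the
    attention weights (N,) over patches.
    """
    d = patches.shape[1]
    scores = patches @ item / np.sqrt(d)   # scaled dot-product similarity
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # softmax attention over patches
    return weights @ patches, weights

# Toy setup: 4 patches in 2-D; two semantically disjoint text items.
patches = np.array([[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [0.1, 1.0]])
item_a = np.array([5.0, 0.0])  # hypothetical embedding for "finding A"
item_b = np.array([0.0, 5.0])  # hypothetical embedding for "finding B"

emb_a, attn_a = item_conditioned_embedding(patches, item_a)
emb_b, attn_b = item_conditioned_embedding(patches, item_b)

# Item independence: distinct items attend to distinct patch regions.
print(attn_a.round(3), attn_b.round(3))
```

In ItemizedCLIP, objectives on top of these item-conditioned embeddings additionally penalize overlapping attention between items (independence) and reward the union of attention maps covering the image (completeness); the sketch above only shows the conditioning step.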