PixCLIP: Achieving Fine-grained Visual Language Understanding via Any-granularity Pixel-Text Alignment Learning

📅 2025-11-06
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address CLIP’s limitations in fine-grained vision–language alignment—particularly for long textual descriptions and localized visual understanding—this paper proposes a three-branch pixel–text alignment framework. Methodologically: (1) it replaces CLIP’s native text encoder with a large language model (LLM) to enhance long-text representation learning; (2) it introduces visual prompting and pixel-level alignment supervision to enable fine-grained cross-modal interaction; and (3) it establishes an automated pipeline for generating pixel-level, long-form text annotations and releases LongGRIT, a large-scale dataset of nearly 1.5 million such samples. Experiments demonstrate state-of-the-art performance on fine-grained referring expression comprehension and grounding benchmarks—including RefCOCO, RefCOCO+, RefCOCOg, and PhraseCut—significantly improving joint modeling of local visual regions and complex semantics. Notably, the framework supports interpretable, pixel-level vision–language alignment at arbitrary granularity.

📝 Abstract
While the Contrastive Language-Image Pretraining (CLIP) model has achieved remarkable success in a variety of downstream vision-language understanding tasks, enhancing its capability for fine-grained image-text alignment remains an active research focus. To this end, most existing works adopt the strategy of explicitly increasing the granularity of visual information processing, e.g., incorporating visual prompts to guide the model's focus toward specific local regions within the image. Meanwhile, research on Multimodal Large Language Models (MLLMs) has demonstrated that training with long and detailed textual descriptions can effectively improve the model's fine-grained vision-language alignment. However, the inherent token length limitation of CLIP's text encoder fundamentally prevents CLIP from processing the more granular textual information embedded in long text sequences. To synergistically leverage the advantages of enhancing both visual and textual processing granularity, we propose PixCLIP, a novel framework designed to concurrently accommodate visual prompt inputs and process lengthy textual descriptions. Specifically, we first establish an automated annotation pipeline capable of generating pixel-level localized, long-form textual descriptions for images. Utilizing this pipeline, we construct LongGRIT, a high-quality dataset comprising nearly 1.5 million samples. Secondly, we replace CLIP's original text encoder with an LLM and propose a three-branch pixel-text alignment learning framework, facilitating fine-grained alignment between image regions and corresponding textual descriptions at arbitrary granularity. Experiments demonstrate that PixCLIP achieves breakthroughs in pixel-level interaction and long-form text handling, attaining state-of-the-art performance.
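The core idea of aligning image regions with text can be illustrated with a minimal sketch: pool visual features under a binary region mask to get a region embedding, then train it against the matching text embedding with a symmetric CLIP-style contrastive (InfoNCE) loss. This is an illustrative NumPy sketch of the general technique, not the paper's actual three-branch implementation; the function names, shapes, and temperature value are assumptions.

```python
import numpy as np

def mask_pool(feature_map: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Average (H, W, D) visual features over a binary (H, W) region mask -> (D,)."""
    weights = mask / max(mask.sum(), 1e-6)  # normalize so weights sum to 1 over the region
    return np.einsum("hwd,hw->d", feature_map, weights)

def info_nce(region_embs: np.ndarray, text_embs: np.ndarray,
             temperature: float = 0.07) -> float:
    """Symmetric contrastive loss over N matched (region, text) embedding pairs."""
    # Unit-normalize so the dot product is cosine similarity, as in CLIP
    r = region_embs / np.linalg.norm(region_embs, axis=1, keepdims=True)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = r @ t.T / temperature                     # (N, N) similarity matrix
    labels = np.arange(len(r))                         # diagonal entries are the positives
    # Cross-entropy in both directions: region -> text and text -> region
    lp_r2t = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    lp_t2r = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    return float(-(lp_r2t[labels, labels].mean() + lp_t2r[labels, labels].mean()) / 2)
```

Using the full-image mask (all ones) recovers ordinary image-level CLIP training, which is one way a single objective can cover "any granularity" from whole images down to small regions.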
Problem

Research questions and friction points this paper is trying to address.

Enhancing fine-grained image-text alignment in CLIP models
Overcoming text encoder limitations for processing detailed descriptions
Synergizing visual prompts with lengthy textual inputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated pipeline generates pixel-level long text annotations
Replaces CLIP text encoder with LLM for lengthy descriptions
Three-branch framework enables any-granularity pixel-text alignment
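Replacing CLIP's text encoder with an LLM requires pooling the LLM's per-token hidden states into a single embedding in the joint image-text space. A common choice for decoder-only LLMs is to take the hidden state of the last non-padding token and project it; the sketch below shows that pattern in NumPy. The pooling strategy and projection are assumptions for illustration — the summary above does not specify PixCLIP's exact design.

```python
import numpy as np

def llm_text_embedding(hidden_states: np.ndarray, attention_mask: np.ndarray,
                       projection: np.ndarray) -> np.ndarray:
    """Pool a causal LM's per-token states into one unit-norm text embedding.

    hidden_states:  (T, H) last-layer hidden states for one sequence
    attention_mask: (T,)   1 for real tokens, 0 for padding
    projection:     (H, D) learned linear map into the joint image-text space
    """
    # Causal LMs accumulate sequence information at the final real token
    last_idx = int(attention_mask.nonzero()[0][-1])
    pooled = hidden_states[last_idx]
    emb = pooled @ projection
    return emb / np.linalg.norm(emb)  # unit-normalize for cosine similarity
```

Because the LLM has no hard 77-token limit, the same pooling applies unchanged to paragraph-length region descriptions, which is what makes long-text supervision feasible.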