Visual Adaptive Prompting for Compositional Zero-Shot Learning

📅 2025-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
In compositional zero-shot learning (CZSL), static text prompts fail to model visual context variation, leading to insufficient semantic–visual alignment. To address this, we propose the Visual-Adaptive Prompting System (VAPS), which introduces a learnable visual prompt library. VAPS dynamically generates context-aware text prompts for each input image via feature-similarity-based retrieval and a lightweight visual prompt adapter. Integrated into a vision–language model framework, VAPS jointly optimizes the visual–language embedding space to enhance cross-modal alignment. Evaluated on three standard CZSL benchmarks—including both closed-set and open-set settings—VAPS achieves state-of-the-art performance, significantly improving recognition accuracy and generalization robustness for unseen attribute–object compositions.

📝 Abstract
Vision-Language Models (VLMs) have demonstrated impressive capabilities in learning joint representations of visual and textual data, making them powerful tools for tasks such as Compositional Zero-Shot Learning (CZSL). CZSL requires models to generalize to novel combinations of visual primitives, such as attributes and objects, that were not explicitly encountered during training. Recent works in prompting for CZSL have focused on modifying inputs for the text encoder, often using static prompts that do not change across varying visual contexts. However, these approaches struggle to fully capture varying visual contexts, as they focus on text adaptation rather than leveraging visual features for compositional reasoning. To address this, we propose the Visual Adaptive Prompting System (VAPS), which leverages a learnable visual prompt repository and a similarity-based retrieval mechanism within the framework of VLMs to bridge the gap between semantic and visual features. Our method introduces a dynamic visual prompt repository mechanism that selects the most relevant attribute and object prompts based on the visual features of the image. Our proposed system also includes a visual prompt adapter that encourages the model to learn a more generalizable embedding space. Experiments on three CZSL benchmarks, across both closed- and open-world scenarios, demonstrate state-of-the-art results.
Problem

Research questions and friction points this paper is trying to address.

Improves Compositional Zero-Shot Learning accuracy
Adapts prompts to visual contexts dynamically
Bridges semantic and visual feature gaps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic visual prompt repository
Similarity-based retrieval mechanism
Visual prompt adapter
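The paper's exact formulation is not reproduced on this page, but the three contributions above can be sketched together: retrieve the prompts whose keys are most similar to the image feature, combine them, and pass the result through a lightweight adapter. The dimensions, repository size, top-k softmax weighting, and one-layer ReLU adapter below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

D = 512          # feature dimension (assumed, e.g. CLIP-sized)
N_PROMPTS = 16   # size of the learnable visual prompt repository (assumed)

# Learnable repository: each prompt vector is paired with a retrieval key.
prompt_keys = rng.standard_normal((N_PROMPTS, D))
prompt_values = rng.standard_normal((N_PROMPTS, D))

def retrieve_prompt(image_feat: np.ndarray, k: int = 3) -> np.ndarray:
    """Select the k most relevant prompts by cosine similarity and blend them."""
    img = image_feat / np.linalg.norm(image_feat)
    keys = prompt_keys / np.linalg.norm(prompt_keys, axis=1, keepdims=True)
    sims = keys @ img                    # cosine similarity to each key
    topk = np.argsort(sims)[-k:]         # indices of the k best-matching prompts
    w = np.exp(sims[topk])
    w /= w.sum()                         # softmax weights over the top-k matches
    return w @ prompt_values[topk]       # weighted combination of retrieved prompts

def adapter(prompt: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """A lightweight one-layer adapter projecting the retrieved visual prompt."""
    return np.maximum(0.0, prompt @ W + b)

# Toy forward pass with a random image feature.
image_feat = rng.standard_normal(D)
visual_prompt = retrieve_prompt(image_feat)
W, b = rng.standard_normal((D, D)) * 0.01, np.zeros(D)
adapted = adapter(visual_prompt, W, b)
print(visual_prompt.shape, adapted.shape)
```

In training, the retrieved and adapted prompt would condition the text encoder's attribute and object tokens, so that the text prompt varies with the visual context rather than staying static.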
Kyle Stein
Ph.D. Candidate, University of West Florida
Deep Learning, Computer Vision, Cybersecurity
A. Mahyari
Department of Intelligent Systems and Robotics, University of West Florida, Pensacola, FL, USA; Florida Institute For Human and Machine Cognition (IHMC), Pensacola, FL, USA
Guillermo A. Francia
Center for Cybersecurity, University of West Florida, Pensacola, FL, USA
Eman El-Sheikh
Center for Cybersecurity, University of West Florida, Pensacola, FL, USA