COSMIC: Clique-Oriented Semantic Multi-space Integration for Robust CLIP Test-Time Adaptation

📅 2025-03-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the weak out-of-distribution (OOD) adaptability of vision-language models (VLMs), unreliable cached feature-label pairs, and oversimplified use of category information in queries, this paper proposes a multi-granularity, cross-modal semantic caching and graph-structured querying framework. The method introduces: (1) a Dual Semantics Graph that jointly encodes coarse-grained CLIP text/image semantics and fine-grained DINOv2 visual features; and (2) a Clique-Guided Hyper-class mechanism that enables collaborative prediction via class-cluster relationships. The framework integrates graph neural networks, cross-modal alignment, cache-augmented learning, and clique-detection-driven hyper-class construction. On OOD recognition, the approach achieves a 15.81% absolute improvement over the state of the art; on cross-domain generalization, it improves by 5.33% (using CLIP RN-50). These results demonstrate substantially improved zero-shot robustness under distribution shift.
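The cache-augmented querying the summary refers to can be sketched in a Tip-Adapter-style form (an assumption for illustration; COSMIC's actual query operates over the dual graphs rather than a flat cache): a test feature is compared against cached features, and the affinities are converted to class logits through the cached one-hot labels. The function name `cache_logits` and the sharpness parameter `beta` are illustrative, not from the paper.

```python
import numpy as np

def cache_logits(query, cache_feats, cache_labels, beta=5.0):
    """Tip-Adapter-style cache query (illustrative sketch).

    query        : (d,) test-time feature
    cache_feats  : (n, d) cached features
    cache_labels : (n, c) one-hot labels of the cached samples
    Returns a (c,) vector of cache-based class logits.
    """
    q = query / np.linalg.norm(query)
    k = cache_feats / np.linalg.norm(cache_feats, axis=1, keepdims=True)
    # Sharpened cosine affinity between the test feature and each cache entry.
    affinity = np.exp(-beta * (1.0 - q @ k.T))
    # Affinities vote for classes through the cached one-hot labels.
    return affinity @ cache_labels
```

In practice such cache logits are blended with the zero-shot CLIP logits; a test feature close to a cached entry inherits that entry's label with high weight.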

📝 Abstract
Recent vision-language models (VLMs) face significant challenges in test-time adaptation to novel domains. While cache-based methods show promise by leveraging historical information, they struggle with both caching unreliable feature-label pairs and indiscriminately using single-class information during querying, significantly compromising adaptation accuracy. To address these limitations, we propose COSMIC (Clique-Oriented Semantic Multi-space Integration for CLIP), a robust test-time adaptation framework that enhances adaptability through multi-granular, cross-modal semantic caching and graph-based querying mechanisms. Our framework introduces two key innovations: Dual Semantics Graph (DSG) and Clique Guided Hyper-class (CGH). The Dual Semantics Graph constructs complementary semantic spaces by incorporating textual features, coarse-grained CLIP features, and fine-grained DINOv2 features to capture rich semantic relationships. Building upon these dual graphs, the Clique Guided Hyper-class component leverages structured class relationships to enhance prediction robustness through correlated class selection. Extensive experiments demonstrate COSMIC's superior performance across multiple benchmarks, achieving significant improvements over state-of-the-art methods: 15.81% gain on out-of-distribution tasks and 5.33% on cross-domain generalization with CLIP RN-50. Code is available at github.com/hf618/COSMIC.
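A minimal sketch of the Dual Semantics Graph idea, under the simplifying assumption that edges come from cosine-similarity thresholding (the paper's construction is richer and also incorporates textual features): cached samples are linked if they are similar in either the coarse CLIP-like space or the fine DINOv2-like space, so the two granularities complement each other. The function names and the threshold `tau` are hypothetical.

```python
import numpy as np

def cosine_sim(feats):
    # Row-normalize, then compute the pairwise cosine-similarity matrix.
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return f @ f.T

def build_dual_graph(coarse_feats, fine_feats, tau=0.9):
    """Join coarse (CLIP-like) and fine (DINOv2-like) semantic spaces:
    two samples are connected if they are similar in EITHER space."""
    a_coarse = cosine_sim(coarse_feats) >= tau
    a_fine = cosine_sim(fine_feats) >= tau
    adj = a_coarse | a_fine
    np.fill_diagonal(adj, False)  # no self-loops
    return adj
```

With this edge rule, a pair that the coarse encoder confuses can still be separated (or linked) by the fine-grained encoder, which is the intuition behind using two semantic spaces.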
Problem

Research questions and friction points this paper is trying to address.

Improves test-time adaptation for vision-language models in novel domains
Addresses unreliable feature-label pairs and single-class information limitations
Enhances adaptability via multi-granular semantic caching and graph-based querying
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-granular cross-modal semantic caching
Dual Semantics Graph for complementary spaces
Clique Guided Hyper-class for robust prediction
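The clique-guided hyper-class idea above can be illustrated with a small sketch, assuming (for illustration only) that hyper-classes are formed by averaging class prototypes within maximal cliques of a class-affinity graph: cliques are enumerated with the Bron-Kerbosch algorithm, and each clique of mutually correlated classes yields one hyper-class prototype. All names here are hypothetical, not the paper's API.

```python
import numpy as np

def maximal_cliques(adj):
    """Bron-Kerbosch maximal-clique enumeration over a boolean adjacency matrix."""
    n = adj.shape[0]
    nbrs = [set(np.flatnonzero(adj[i])) for i in range(n)]
    cliques = []

    def bk(r, p, x):
        if not p and not x:
            cliques.append(sorted(r))  # r is maximal: nothing left to extend it
            return
        for v in list(p):
            bk(r | {v}, p & nbrs[v], x & nbrs[v])
            p.remove(v)
            x.add(v)

    bk(set(), set(range(n)), set())
    return cliques

def hyper_class_prototypes(class_protos, adj):
    """One hyper-class prototype per maximal clique of correlated classes,
    obtained by averaging the member classes' prototypes."""
    return [class_protos[c].mean(axis=0) for c in maximal_cliques(adj)]
```

Predicting against hyper-class prototypes first, then refining within the selected clique, is one way such class-cluster structure can make predictions more robust to single-class noise.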