Concept-Guided Fine-Tuning: Steering ViTs away from Spurious Correlations to Improve Robustness

📅 2026-03-09
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Vision Transformers (ViTs) often exhibit reduced robustness under distribution shifts because they rely on spurious correlations, such as background cues. To address this, the authors propose an unsupervised fine-tuning framework that uses large language models to generate class-relevant semantic concepts and vision-language models to automatically construct fine-grained part-level masks. An objective function that promotes alignment with these semantic concepts while suppressing background dependencies steers ViTs to attend to semantically critical regions. Evaluated across five out-of-distribution benchmarks, the method significantly improves both the robustness and the interpretability of several ViT architectures. The automatically generated masks outperform conventional segmentation supervision, and the model's internal attention maps align more accurately with semantic parts.

πŸ“ Abstract
Vision Transformers (ViTs) often degrade under distribution shifts because they rely on spurious correlations, such as background cues, rather than semantically meaningful features. Existing regularization methods typically rely on simple foreground-background masks, which fail to capture the fine-grained semantic concepts that define an object (e.g., ``long beak'' and ``wings'' for a ``bird''). As a result, these methods provide limited robustness to distribution shifts. To address this limitation, we introduce a novel fine-tuning framework that steers model reasoning toward concept-level semantics. Our approach optimizes the model's internal relevance maps to align with spatially grounded concept masks. These masks are generated automatically, without manual annotation: class-relevant concepts are first proposed using an LLM-based, label-free method and then segmented using a VLM. The fine-tuning objective aligns relevance with these concept regions while simultaneously suppressing focus on spurious background areas. Notably, this process requires only a minimal set of images and uses only half of the dataset's classes. Extensive experiments on five out-of-distribution benchmarks demonstrate that our method improves robustness across multiple ViT-based models. Furthermore, we show that the resulting relevance maps exhibit stronger alignment with semantic object parts, offering a scalable path toward more robust and interpretable vision models. Finally, we confirm that concept-guided masks provide more effective supervision for model robustness than conventional segmentation maps, supporting our central hypothesis.
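The abstract does not give the exact loss, but the objective it describes — rewarding relevance that falls inside concept regions while penalizing relevance on the background — can be sketched as follows. The function name, the normalization scheme, and the weight `lambda_bg` are illustrative assumptions for this sketch, not the authors' implementation; the concept and background masks stand in for the VLM-generated part masks described in the paper.

```python
import numpy as np

def concept_alignment_loss(relevance, concept_masks, background_mask, lambda_bg=1.0):
    """Illustrative objective: reward relevance mass inside concept regions,
    penalize relevance mass on the background.

    relevance       : (H, W) non-negative relevance map from the ViT
    concept_masks   : (K, H, W) binary masks, one per segmented concept
    background_mask : (H, W) binary mask of non-object pixels
    """
    # Normalize the relevance map to a distribution over pixels.
    r = relevance / (relevance.sum() + 1e-8)
    # Fraction of relevance falling inside the union of concept regions.
    concept_union = concept_masks.max(axis=0)
    concept_term = (r * concept_union).sum()
    # Fraction of relevance leaking onto the background.
    background_term = (r * background_mask).sum()
    # Maximize concept alignment, minimize background reliance.
    return -concept_term + lambda_bg * background_term
```

On a toy 2×2 example, a relevance map concentrated on a concept pixel yields a lower loss than one concentrated on a background pixel, which is the direction the fine-tuning objective pushes the model.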
Problem

Research questions and friction points this paper is trying to address.

spurious correlations
distribution shifts
Vision Transformers
semantic concepts
robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Concept-Guided Fine-Tuning
Vision Transformers
Spurious Correlations
Semantic Concepts
Relevance Map Alignment