DCAC: Dynamic Class-Aware Cache Creates Stronger Out-of-Distribution Detectors

📅 2026-01-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deep neural networks often produce overconfident predictions on out-of-distribution (OOD) samples during inference. To address this issue, this work proposes DCAC, a training-free, test-time calibration module that maintains a separate cache for each known class to dynamically collect high-entropy samples and performs two-stage lightweight calibration by fusing visual features with prediction probabilities. DCAC introduces, for the first time, a class-aware caching mechanism that effectively captures intra-class visual similarities between OOD samples and known categories. The method is compatible with both unimodal and vision-language models. Experimental results demonstrate that DCAC significantly enhances the performance of various OOD detection approaches; for instance, when combined with ASH-S on ImageNet, it reduces the FPR95 metric by 6.55%.
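The summary describes a per-class cache that collects high-entropy test samples and then calibrates new predictions against them. The paper's exact algorithm is not given here, so the following is only an illustrative sketch under assumed details: the class names, capacity, entropy threshold, and the similarity-based calibration rule are all hypothetical, not DCAC's actual formulation.

```python
import numpy as np

class DynamicClassCache:
    """Hypothetical sketch of a class-aware cache of high-entropy samples.
    Capacity, threshold, and the calibration rule are illustrative
    assumptions, not the paper's exact method."""

    def __init__(self, num_classes, capacity=8, entropy_threshold=1.0):
        self.capacity = capacity
        self.entropy_threshold = entropy_threshold
        # One list of (entropy, feature) pairs per known class.
        self.caches = {c: [] for c in range(num_classes)}

    @staticmethod
    def _entropy(probs):
        p = np.clip(probs, 1e-12, 1.0)
        return float(-(p * np.log(p)).sum())

    def update(self, feature, probs):
        """Cache a sample under its predicted class if its entropy is high."""
        c = int(np.argmax(probs))
        h = self._entropy(probs)
        if h < self.entropy_threshold:
            return  # confident (likely ID) samples are not cached
        cache = self.caches[c]
        cache.append((h, np.asarray(feature, dtype=float)))
        # Keep only the highest-entropy entries when over capacity.
        cache.sort(key=lambda t: t[0], reverse=True)
        del cache[self.capacity:]

    def calibrated_score(self, feature, probs, base_score):
        """Down-weight a base ID-confidence score when the sample is
        visually similar to cached (likely OOD) samples of its class."""
        c = int(np.argmax(probs))
        cache = self.caches[c]
        if not cache:
            return base_score
        f = np.asarray(feature, dtype=float)
        f = f / (np.linalg.norm(f) + 1e-12)
        sims = [f @ (g / (np.linalg.norm(g) + 1e-12)) for _, g in cache]
        # High similarity to cached high-entropy neighbors suggests OOD.
        return base_score - max(sims)
```

Usage would wrap an existing OOD score (e.g., from ASH-S): call `update` on each test sample, then `calibrated_score` to adjust the raw score, which matches the training-free, test-time spirit of the module.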

📝 Abstract
Out-of-distribution (OOD) detection remains a fundamental challenge for deep neural networks, particularly due to overconfident predictions on unseen OOD samples during testing. We reveal a key insight: OOD samples predicted as the same class, or given high probabilities for it, are visually more similar to each other than to the true in-distribution (ID) samples. Motivated by this class-specific observation, we propose DCAC (Dynamic Class-Aware Cache), a training-free, test-time calibration module that maintains separate caches for each ID class to collect high-entropy samples and calibrate the raw predictions of input samples. DCAC leverages cached visual features and predicted probabilities through a lightweight two-layer module to mitigate overconfident predictions on OOD samples. This module can be seamlessly integrated with various existing OOD detection methods across both unimodal and vision-language models while introducing minimal computational overhead. Extensive experiments on multiple OOD benchmarks demonstrate that DCAC significantly enhances existing methods, achieving substantial improvements, e.g., reducing FPR95 by 6.55% when integrated with ASH-S on the ImageNet OOD benchmark.
Problem

Research questions and friction points this paper is trying to address.

out-of-distribution detection
overconfident predictions
deep neural networks
OOD samples
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Class-Aware Cache
Out-of-Distribution Detection
Test-Time Calibration
Training-Free
Vision-Language Models
Yanqi Wu
School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China, Peng Cheng Laboratory, Shenzhen, China, Key Laboratory of Machine Intelligence and Advanced Computing, MOE, Guangzhou, China
Qichao Chen
University of Nottingham Malaysia, Semenyih, Malaysia
Runhe Lai
School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China, Peng Cheng Laboratory, Shenzhen, China, Key Laboratory of Machine Intelligence and Advanced Computing, MOE, Guangzhou, China
Xinhua Lu
School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China, Peng Cheng Laboratory, Shenzhen, China, Key Laboratory of Machine Intelligence and Advanced Computing, MOE, Guangzhou, China
Jia-Xin Zhuang
Hong Kong University of Science and Technology, Hong Kong, China
Zhilin Zhao
School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China, Key Laboratory of Machine Intelligence and Advanced Computing, MOE, Guangzhou, China
Wei-Shi Zheng
Professor @ Sun Yat-sen University
Computer Vision, Pattern Recognition, Machine Learning
Ruixuan Wang
Sun Yat-Sen University
Computer Vision, Pattern Recognition, Machine Learning, Medical Image Analysis