Test-Time Adaptation with Binary Feedback

📅 2025-05-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing test-time adaptation (TTA) methods fail under severe domain shifts, while active TTA approaches incur high annotation costs by requiring full class labels. To address these limitations, this paper proposes a lightweight TTA paradigm that relies solely on sparse binary feedback (correct/incorrect) during inference. We introduce the first binary-feedback-driven adaptation setting and present BiTTA, a dual-path framework: one path uses reinforcement learning to adapt on uncertain samples guided by binary feedback; the other applies agreement-based consistency regularization for unsupervised self-adaptation on confident predictions. Crucially, BiTTA never requires full class labels, drastically reducing human annotation effort. Evaluated on benchmarks with severe domain shifts, BiTTA achieves an average accuracy gain of 13.3 percentage points over state-of-the-art TTA and active TTA methods, establishing a new paradigm for low-cost, robust online model adaptation under distribution shift.

📝 Abstract
Deep learning models perform poorly when domain shifts exist between training and test data. Test-time adaptation (TTA) is a paradigm to mitigate this issue by adapting pre-trained models using only unlabeled test samples. However, existing TTA methods can fail under severe domain shifts, while recent active TTA approaches requiring full-class labels are impractical due to high labeling costs. To address this issue, we introduce a new setting of TTA with binary feedback. This setting uses a few binary feedback inputs from annotators to indicate whether model predictions are correct, thereby significantly reducing the labeling burden of annotators. Under this setting, we propose BiTTA, a novel dual-path optimization framework that leverages reinforcement learning to balance binary feedback-guided adaptation on uncertain samples with agreement-based self-adaptation on confident predictions. Experiments show BiTTA achieves 13.3%p accuracy improvements over state-of-the-art baselines, demonstrating its effectiveness in handling severe distribution shifts with minimal labeling effort. The source code is available at https://github.com/taeckyung/BiTTA.
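The abstract describes routing uncertain samples to the annotator for binary feedback. A minimal sketch of one plausible selection rule, using predictive entropy as the uncertainty score (the function names, the entropy heuristic, and the fixed budget are illustrative assumptions, not the paper's exact mechanism):

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (higher = more uncertain)."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p + 1e-12)))

def select_for_feedback(batch_probs, budget):
    """Pick the `budget` most-uncertain samples in a test batch to send
    to the annotator for binary (correct/incorrect) feedback.
    Illustrative heuristic; the paper's selection rule may differ."""
    scores = [entropy(p) for p in batch_probs]
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:budget]
```

Samples not selected would fall through to the unsupervised self-adaptation path, keeping the per-batch annotation cost fixed at `budget` bits.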
Problem

Research questions and friction points this paper is trying to address.

Adapting deep learning models to domain shifts with minimal labeling effort
Mitigating severe domain shifts using binary feedback for model correction
Balancing feedback-guided and self-adaptation for improved test-time performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Binary (correct/incorrect) feedback replaces full class labels, sharply reducing labeling burden
Dual-path optimization balances feedback-guided adaptation on uncertain samples with agreement-based self-adaptation on confident ones
Reinforcement learning lets the model learn from the non-differentiable binary reward, remaining effective under severe domain shifts
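The dual-path idea above can be sketched as a single per-sample objective: a REINFORCE-style term on feedback samples (reward = the binary signal, log-probability of the model's own prediction) plus a consistency term on unlabeled samples. This is a minimal illustrative sketch under assumed names (`bitta_style_loss`, `alpha`) and a generic cross-view consistency regularizer, not the paper's exact losses:

```python
import numpy as np

def bitta_style_loss(probs, pred, feedback, probs_aug, alpha=1.0):
    """Sketch of a dual-path adaptation loss for one test sample.

    probs     : class probabilities under the original view (1-D).
    pred      : predicted class index (argmax of probs).
    feedback  : +1 (annotator says correct), -1 (incorrect), or None
                if this sample received no feedback.
    probs_aug : class probabilities under an augmented view.
    alpha     : weight of the unsupervised consistency path.
    """
    probs = np.asarray(probs, dtype=float)
    probs_aug = np.asarray(probs_aug, dtype=float)
    if feedback is not None:
        # Feedback path (REINFORCE-style): minimizing -reward * log p(pred)
        # raises the predicted class's probability when feedback is +1
        # and lowers it when feedback is -1.
        return float(-feedback * np.log(probs[pred] + 1e-12))
    # Self-adaptation path: cross-entropy between the two views,
    # a common consistency regularizer (illustrative choice).
    return float(alpha * -np.sum(probs * np.log(probs_aug + 1e-12)))
```

With `feedback=+1` the loss is positive and shrinks as the predicted class's probability grows; with `feedback=-1` its sign flips, so gradient descent pushes probability away from the rejected prediction.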