AI Summary
This work addresses the instability and performance limitations of few-shot learning during inference, which arise from the common assumption of batch-wise independence that prevents leveraging historical query samples. To overcome this, the authors propose the Incremental Prototype Enhancement Classifier (IPEC), which dynamically constructs an auxiliary set of high-confidence query samples and fuses it with the support set to progressively refine class prototypes. IPEC incorporates a dual-filtering mechanism that balances global confidence and local discriminability, along with a Bayesian-inspired prototype update strategy that treats the support set as a prior and the auxiliary set as likelihood-derived evidence. A two-stage "warm-up and test" inference protocol is introduced to move beyond static prototype representations. Extensive experiments demonstrate that IPEC significantly outperforms existing methods across multiple few-shot classification benchmarks, effectively enhancing both prototype stability and classification accuracy.
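The Bayesian-inspired prototype update described above can be sketched as a weighted fusion of the support-set mean (prior) with the auxiliary-set mean (likelihood-derived evidence). This is a minimal illustration, not the authors' exact formulation; the function name, the pseudo-count weighting scheme, and the `prior_weight` parameter are assumptions:

```python
import numpy as np

def update_prototype(support_feats, aux_feats, prior_weight=1.0):
    """Illustrative Bayesian-style prototype fusion (sketch, not the paper's exact rule).

    The support-set mean acts as a prior; the auxiliary-set mean acts as
    evidence. The posterior prototype is a pseudo-count-weighted average,
    so influence shifts toward the auxiliary set as it accumulates samples,
    reducing reliance on the initial support set.
    """
    n_s, n_a = len(support_feats), len(aux_feats)
    support_mean = np.mean(support_feats, axis=0)
    if n_a == 0:
        # No accumulated evidence yet: fall back to the plain support prototype.
        return support_mean
    aux_mean = np.mean(aux_feats, axis=0)
    # Sample counts serve as pseudo-observation weights for prior and evidence.
    w_s = prior_weight * n_s
    return (w_s * support_mean + n_a * aux_mean) / (w_s + n_a)
```

With an empty auxiliary set the update degenerates to the classic support-mean prototype, which matches the intended behavior of falling back on the prior when no high-confidence queries have been collected.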
Abstract
Metric-based few-shot approaches have gained significant popularity due to their relatively straightforward implementation, high interpretability, and computational efficiency. However, their performance is limited by the batch-independence assumption during testing, which prevents the model from leveraging valuable knowledge accumulated from previous batches. To address this challenge, we propose the Incremental Prototype Enhancement Classifier (IPEC), a novel test-time method that optimizes prototype estimation by leveraging information from previous query samples. IPEC maintains a dynamic auxiliary set by selectively incorporating query samples that are classified with high confidence. To ensure sample quality, we design a robust dual-filtering mechanism that assesses each query sample based on both global prediction confidence and local discriminative ability. By aggregating this auxiliary set with the support set in subsequent tasks, IPEC builds progressively more stable and representative prototypes, effectively reducing its reliance on the initial support set. We ground this approach in a Bayesian interpretation, conceptualizing the support set as a prior and the auxiliary set as a data-driven posterior, which in turn motivates the design of a practical "warm-up and test" two-stage inference protocol. Extensive empirical results validate the superior performance of our proposed method across multiple few-shot classification tasks.
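The dual-filtering mechanism can be sketched as two gates on each query: a global gate on prediction confidence and a local gate on the margin between the two nearest prototypes. Everything here is an illustrative assumption under a nearest-prototype classifier with Euclidean distances; the thresholds `tau` and `delta` and the softmax-over-negative-distance confidence are not values or definitions taken from the paper:

```python
import numpy as np

def dual_filter(query_feat, prototypes, tau=0.7, delta=0.2):
    """Sketch of a dual-filtering admission test for the auxiliary set.

    A query is admitted only if it passes both gates:
      1. Global confidence: the top softmax probability (over negative
         distances to prototypes) exceeds tau.
      2. Local discriminability: the gap between the nearest and
         second-nearest prototype distances exceeds delta.
    Returns (predicted_class, admitted).
    """
    dists = np.linalg.norm(prototypes - query_feat, axis=1)
    # Numerically stable softmax over negative distances.
    logits = -dists
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    pred = int(np.argmin(dists))
    sorted_d = np.sort(dists)
    margin = sorted_d[1] - sorted_d[0]  # local discriminability gap
    admitted = (probs.max() >= tau) and (margin >= delta)
    return pred, admitted
```

Requiring both gates means a query near one prototype but also near a class boundary (small margin) is rejected even when its softmax confidence is high, which is the point of combining a global criterion with a local one.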