Cleaning the Pool: Progressive Filtering of Unlabeled Pools in Deep Active Learning

📅 2025-11-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
In active learning (AL), single-strategy selection often fails to maintain optimality across the entire labeling cycle, leading to unstable performance. To address this, we propose REFINE—a dynamic multi-strategy ensemble framework featuring a two-stage mechanism: progressive filtering and coverage-aware batch selection. Its core innovation lies in a scalable weighted fusion module that dynamically integrates heterogeneous sample values—such as uncertainty and representativeness—to preserve high-value instances across multiple dimensions, while leveraging coverage optimization to enhance batch diversity. Extensive experiments across six benchmark datasets and three base models demonstrate that REFINE consistently outperforms both single-strategy baselines and state-of-the-art ensemble methods. Moreover, the refined unlabeled pool produced by REFINE significantly boosts the performance of arbitrary AL strategies, thereby improving the stability, adaptability, and generalization capability of the AL process.

📝 Abstract
Existing active learning (AL) strategies capture fundamentally different notions of data value, e.g., uncertainty or representativeness. Consequently, the effectiveness of strategies can vary substantially across datasets, models, and even AL cycles. Committing to a single strategy risks suboptimal performance, as no single strategy dominates throughout the entire AL process. We introduce REFINE, an ensemble AL method that combines multiple strategies without knowing in advance which will perform best. In each AL cycle, REFINE operates in two stages: (1) Progressive filtering iteratively refines the unlabeled pool by considering an ensemble of AL strategies, retaining promising candidates capturing different notions of value. (2) Coverage-based selection then chooses a final batch from this refined pool, ensuring all previously identified notions of value are accounted for. Extensive experiments across 6 classification datasets and 3 foundation models show that REFINE consistently outperforms individual strategies and existing ensemble methods. Notably, progressive filtering serves as a powerful preprocessing step that improves the performance of any individual AL strategy applied to the refined pool, which we demonstrate on an audio spectrogram classification use case. Finally, the ensemble of REFINE can be easily extended with upcoming state-of-the-art AL strategies.
Problem

Research questions and friction points this paper is trying to address.

No single AL strategy dominates across datasets, models, and AL cycles
Committing to one strategy risks unstable, suboptimal performance
Existing strategies capture fundamentally different notions of data value (e.g., uncertainty vs. representativeness)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ensemble method combines multiple active learning strategies
Progressive filtering refines unlabeled pool iteratively
Coverage-based selection ensures all identified value notions are accounted for
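The two-stage procedure above can be sketched in a few lines. This is an illustrative approximation rather than the authors' implementation: the strategy score functions, the `keep_fraction` and `rounds` parameters, and the round-robin coverage rule are assumptions made for the sketch.

```python
import numpy as np

def progressive_filter(scores_per_strategy, keep_fraction=0.5, rounds=2):
    """Stage 1 (sketch): iteratively shrink the unlabeled pool by keeping
    the union of every strategy's top-scoring survivors each round, so
    candidates valued under any notion of value are retained."""
    n = len(next(iter(scores_per_strategy.values())))
    pool = np.arange(n)
    for _ in range(rounds):
        k = max(1, int(len(pool) * keep_fraction))
        survivors = set()
        for scores in scores_per_strategy.values():
            # Indices of the current pool, ordered best-first for this strategy.
            order = pool[np.argsort(-scores[pool])]
            survivors.update(order[:k].tolist())
        pool = np.array(sorted(survivors))
    return pool

def coverage_select(scores_per_strategy, pool, batch_size):
    """Stage 2 (sketch): greedy round-robin over strategies, so the final
    batch covers each notion of value at least once."""
    chosen, remaining = [], list(pool)
    names = list(scores_per_strategy)
    i = 0
    while len(chosen) < batch_size and remaining:
        scores = scores_per_strategy[names[i % len(names)]]
        best = max(remaining, key=lambda idx: scores[idx])
        chosen.append(best)
        remaining.remove(best)
        i += 1
    return chosen
```

In each AL cycle one would refresh the per-strategy scores (e.g., entropy for uncertainty, distance-to-labeled-set for representativeness), filter the pool, and query labels for the selected batch.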
Denis Huseljic
Intelligent Embedded Systems, University of Kassel
Marek Herde
Intelligent Embedded Systems, University of Kassel
Lukas Rauch
University of Kassel
Deep Learning, Self-Supervised Learning, Active Learning, Bioacoustics
Paul Hahn
Intelligent Embedded Systems, University of Kassel
Bernhard Sick
Professor of Intelligent Embedded Systems, University of Kassel
Machine Learning, Pattern Recognition, Autonomous Learning, Intelligent Systems, Organic Computing