ALScope: A Unified Toolkit for Deep Active Learning

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing deep active learning (DAL) algorithms lack a unified, fair evaluation platform—particularly for systematically assessing robustness under realistic challenges such as distribution shift (e.g., open-set recognition) and class imbalance. To address this, we propose ALScope: the first multi-domain, multi-challenge unified benchmarking framework for DAL. It integrates 10 diverse computer vision and natural language processing datasets with 21 state-of-the-art algorithms, and supports configurable experimental settings—including open-set recognition, out-of-distribution detection, and imbalanced learning. Its modular architecture enables multidimensional quantitative analysis of algorithmic performance, query efficiency, and generalization capability. Extensive experiments reveal substantial performance variance across non-standard scenarios; notably, several high-accuracy methods suffer from prohibitive query latency, exposing critical practical bottlenecks. ALScope thus establishes a foundational benchmark and identifies concrete optimization directions for future DAL algorithm design and real-world deployment.

📝 Abstract
Deep Active Learning (DAL) reduces annotation costs by selecting the most informative unlabeled samples during training. As real-world applications become more complex, challenges stemming from distribution shifts (e.g., open-set recognition) and data imbalance have gained increasing attention, prompting the development of numerous DAL algorithms. However, the lack of a unified platform has hindered fair and systematic evaluation under diverse conditions. We therefore present ALScope, a new DAL platform for classification tasks that integrates 10 datasets from computer vision (CV) and natural language processing (NLP) and 21 representative DAL algorithms, including both classical baselines and recent approaches designed to handle challenges such as distribution shifts and data imbalance. The platform supports flexible configuration of key experimental factors, ranging from algorithm and dataset choices to task-specific factors such as the out-of-distribution (OOD) sample ratio and the class imbalance ratio, enabling comprehensive and realistic evaluation. We conduct extensive experiments on this platform under various settings. Our findings show that: (1) the performance of DAL algorithms varies significantly across domains and task settings; (2) in non-standard scenarios such as imbalanced and open-set settings, DAL algorithms show room for improvement and require further investigation; and (3) some algorithms achieve good performance but require significantly longer selection time.
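
To make the query step described in the abstract concrete, the following is a minimal, self-contained Python sketch of one active-learning round using least-confidence uncertainty sampling, one of the classical baselines a platform like ALScope evaluates. The data shapes, model choice, and function names are illustrative assumptions and do not reflect ALScope's actual API.

import numpy as np
from sklearn.linear_model import LogisticRegression

def query_least_confidence(model, X_unlabeled, budget):
    # Score every unlabeled sample by the model's top-class probability;
    # the least confident samples are treated as the most informative.
    probs = model.predict_proba(X_unlabeled)   # shape: (n_unlabeled, n_classes)
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:budget]     # indices of samples to send for labeling

# Illustrative data: a small labeled seed set and a large unlabeled pool.
rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(100, 20))
y_labeled = rng.integers(0, 2, size=100)
X_unlabeled = rng.normal(size=(5000, 20))

model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
picked = query_least_confidence(model, X_unlabeled, budget=50)
# In a full DAL loop, `picked` is annotated, added to the labeled set, the model
# is retrained, and the round repeats; the time spent inside the query function
# is the "selection time" that finding (3) above refers to.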
Problem

Research questions and friction points this paper is trying to address.

Lack of unified platform for fair DAL algorithm evaluation
Challenges from distribution shifts and data imbalance in DAL
Need comprehensive evaluation under diverse real-world conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified platform for Deep Active Learning evaluation
Integrates diverse datasets and DAL algorithms
Supports flexible experimental configuration settings (see the configuration sketch below)
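
As an illustration of what such flexible experimental configuration can look like, below is a hypothetical run configuration exposing the factors named in the abstract (dataset, algorithm, OOD sample ratio, class imbalance ratio). All field names, defaults, and the example algorithm names (BADGE, Coreset are common DAL query strategies) are assumptions for illustration, not ALScope's real interface.

from dataclasses import dataclass

@dataclass
class DALExperimentConfig:
    # Hypothetical configuration object; field names are illustrative only.
    dataset: str = "CIFAR-10"        # one of the 10 CV/NLP datasets
    algorithm: str = "BADGE"         # one of the 21 integrated DAL algorithms
    initial_labeled: int = 1000      # size of the seed labeled set
    query_budget: int = 500          # samples queried per round
    rounds: int = 10                 # number of query/retrain rounds
    ood_ratio: float = 0.3           # fraction of the unlabeled pool drawn from OOD classes
    imbalance_ratio: float = 0.1     # minority-to-majority class frequency ratio
    seed: int = 42                   # fixed seed for fair cross-algorithm comparison

# Example: rerun the same open-set setting with a different query strategy.
config = DALExperimentConfig(algorithm="Coreset", ood_ratio=0.5)
print(config)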
Authors

Chenkai Wu, Monash University, Australia
Yuanyuan Qi, Monash University, Australia
Xiaohao Yang, Google
Jueqing Lu, Monash University
Gang Liu, Harbin Engineering University, China
Wray Buntine, Professor, VinUniversity
Lan Du, Monash University, Australia