🤖 AI Summary
To address the scarcity of labeled data and low query efficiency during the cold-start phase of active learning, this paper proposes a counterfactual data augmentation method grounded in Variation Theory. It pioneers the cross-domain adaptation of educational Variation Theory to active learning, establishing a neuro-symbolic collaborative framework. Leveraging a pipeline that integrates large language models with rule-based engines, the method generates semantically controlled counterfactual samples that emphasize essential conceptual distinctions, thereby guiding models to precisely identify class boundaries. Empirical results demonstrate significantly improved data efficiency under low-labeling budgets: text classification accuracy increases by 12.3% over baselines; the performance gain naturally diminishes as labeling effort increases—confirming its efficacy specifically for cold-start scenarios. Key contributions include (i) the theoretical transfer of Variation Theory from education to machine learning, (ii) a novel mechanism for semantics-aware, controllable counterfactual generation, and (iii) a discrimination-oriented active querying paradigm.
📝 Abstract
Active Learning (AL) allows models to learn interactively from user feedback. This paper introduces a counterfactual data augmentation approach to AL, particularly addressing the selection of datapoints for user querying, a pivotal concern in enhancing data efficiency. Our approach is inspired by Variation Theory, a theory of human concept learning that emphasizes the essential features of a concept by focusing on what stays the same and what changes. Instead of just querying with existing datapoints, our approach synthesizes artificial datapoints that highlight potential key similarities and differences among labels using a neuro-symbolic pipeline combining large language models (LLMs) and rule-based models. Through an experiment in the example domain of text classification, we show that our approach achieves significantly higher performance when there are fewer annotated data. As the annotated training data gets larger the impact of the generated data starts to diminish showing its capability to address the cold start problem in AL. This research sheds light on integrating theories of human learning into the optimization of AL.