Next Generation Active Learning: Mixture of LLMs in the Loop

📅 2026-01-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a novel active learning framework that addresses a key limitation of large language models (LLMs) as automatic annotators: their labeling quality often falls short of what practical deployment requires. The approach introduces, for the first time, a lightweight mixture-of-LLMs architecture into the active learning loop, combining annotation disagreement detection with a negative learning mechanism to improve labeling robustness and overall model performance. Empirical evaluations across multiple tasks demonstrate that the method achieves annotation quality comparable to human labeling, significantly outperforming both single LLMs and alternative ensemble strategies. Furthermore, the framework supports efficient local deployment, making it suitable for resource-constrained environments.
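The core of the annotation step described above is aggregating labels from several lightweight LLMs and flagging examples where the annotators disagree. The paper does not publish its exact aggregation rule here, so the sketch below uses a simple majority vote with a unanimity threshold as a stand-in for its annotation-discrepancy detection; the function name, label format, and threshold are all illustrative assumptions.

```python
from collections import Counter

def aggregate_annotations(labels_per_model, agreement_threshold=1.0):
    """Majority-vote aggregation over labels from several LLM annotators.

    labels_per_model: one list of labels per LLM (hypothetical format).
    Returns one (aggregated_label, is_reliable) pair per example; an
    example is flagged unreliable when the winning label's vote share
    falls below `agreement_threshold` (a stand-in for the paper's
    annotation-discrepancy criterion).
    """
    n_models = len(labels_per_model)
    results = []
    for example_labels in zip(*labels_per_model):
        counts = Counter(example_labels)
        label, votes = counts.most_common(1)[0]
        reliable = votes / n_models >= agreement_threshold
        results.append((label, reliable))
    return results

# Three lightweight LLM annotators labeling four examples:
votes = [
    ["pos", "neg", "pos", "neg"],   # LLM A
    ["pos", "neg", "neg", "neg"],   # LLM B
    ["pos", "pos", "neg", "neg"],   # LLM C
]
print(aggregate_annotations(votes))
# → [('pos', True), ('neg', False), ('neg', False), ('neg', True)]
```

Examples flagged unreliable (the two middle ones here) would be the candidates for the negative learning mechanism rather than standard supervised training.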

📝 Abstract
With the rapid advancement and strong generalization capabilities of large language models (LLMs), they have been increasingly incorporated into active learning pipelines as annotators to reduce annotation costs. However, labels generated by LLMs often fall short of the quality required for real-world applicability. To address this, we propose a novel active learning framework, Mixture of LLMs in the Loop Active Learning, which replaces human annotators with labels generated by a Mixture-of-LLMs-based annotation model, enhancing the robustness of LLM-based annotation by aggregating the strengths of multiple LLMs. To further mitigate the impact of noisy labels, we introduce annotation discrepancy and negative learning to identify unreliable annotations and improve learning effectiveness. Extensive experiments demonstrate that our framework achieves performance comparable to human annotation and consistently outperforms single-LLM baselines and other LLM-ensemble-based approaches. Moreover, our framework is built on lightweight LLMs, enabling it to operate fully on local machines in real-world applications.
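The negative learning idea mentioned in the abstract can be illustrated with a standard complementary-label loss: for an annotation flagged as unreliable, instead of maximizing the probability of the (possibly wrong) label, the model minimizes it. This is a generic NLNL-style formulation under our own assumptions, not the paper's exact loss; the function names and probabilities are illustrative.

```python
import math

def cross_entropy_loss(probs, label):
    """Standard positive-learning loss used for reliable annotations."""
    return -math.log(probs[label] + 1e-12)

def negative_learning_loss(probs, label):
    """Complementary-label ('negative learning') loss for one example.

    Minimizes p(label) via -log(1 - p(label)), pushing the model away
    from a possibly wrong annotation. `probs` is the model's softmax
    output (a list of class probabilities); `label` indexes the noisy
    class flagged by annotation-discrepancy detection.
    """
    return -math.log(1.0 - probs[label] + 1e-12)

probs = [0.7, 0.2, 0.1]
# Reliable annotation: learn the label directly.
print(cross_entropy_loss(probs, 0))
# Unreliable annotation: push probability mass away from it.
print(negative_learning_loss(probs, 0))
```

Intuitively, the negative loss is small when the model already assigns low probability to the suspect label, so unreliable annotations contribute a weak, corrective signal rather than a strong, possibly misleading one.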
Problem

Research questions and friction points this paper is trying to address.

active learning
large language models
annotation quality
noisy labels
human annotation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture of LLMs
Active Learning
Annotation Discrepancy
Negative Learning
Lightweight LLMs