🤖 AI Summary
To address the high cost of expert annotation and the severe label noise in crowdsourced data for sequential multi-output tasks, this paper proposes CAMEL, the first pool-based active learning framework to unify active learning with label correction, jointly minimizing expert annotation effort while maximizing self-supervision. CAMEL introduces a sequence-level, confidence-driven hybrid annotation paradigm: it requests expert annotations only for low-confidence local segments and relies on self-supervision from the model (e.g., a Transformer) to complete the rest of the sequence. It further incorporates prediction-consistency constraints, iterative confidence reweighting, and a label refinement mechanism to automatically correct noisy labels. Experiments on dialogue belief tracking demonstrate an improvement of over 40% in annotation efficiency, and downstream models trained on the corrected data achieve F1 gains of 2.3–3.8 points, empirically validating substantial improvements in label quality.
📝 Abstract
Supervised neural approaches are hindered by their dependence on large, meticulously annotated datasets, a requirement that is particularly cumbersome for sequential tasks. The quality of annotations tends to deteriorate with the transition from expert-based to crowd-sourced labeling. To address these challenges, we present CAMEL (Confidence-based Acquisition Model for Efficient self-supervised active Learning), a pool-based active learning framework tailored to sequential multi-output problems. CAMEL possesses two core features: (1) it requires expert annotators to label only a fraction of a chosen sequence, and (2) it facilitates self-supervision for the remainder of the sequence. By deploying a label correction mechanism, CAMEL can also be utilized for data cleaning. We evaluate CAMEL on two sequential tasks, with a special emphasis on dialogue belief tracking, a task plagued by the constraints of limited and noisy datasets. Our experiments demonstrate that CAMEL significantly outperforms the baselines in terms of efficiency. Furthermore, the data corrections suggested by our method contribute to an overall improvement in the quality of the resulting datasets.
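To make the hybrid annotation idea concrete, here is a minimal, hypothetical sketch of the confidence-based split the abstract describes: within a selected sequence, positions where the model is confident keep their self-supervised labels, while only low-confidence positions are routed to an expert. The function name, threshold value, and toy labels are illustrative assumptions, not taken from the paper.

```python
def hybrid_annotate(model_predictions, threshold=0.9):
    """Split a sequence's labels between self-supervision and expert annotation.

    model_predictions: list of (label, confidence) pairs, one per sequence step.
    Returns (labels, expert_queries): labels keeps the model's label where its
    confidence clears the threshold, with None placeholders elsewhere;
    expert_queries lists the indices that still need an expert.
    """
    labels, expert_queries = [], []
    for i, (label, conf) in enumerate(model_predictions):
        if conf >= threshold:
            labels.append(label)        # confident enough: self-supervised label
        else:
            labels.append(None)         # placeholder until an expert annotates
            expert_queries.append(i)    # only this fraction costs expert effort
    return labels, expert_queries

# Toy 5-step dialogue: only the two low-confidence steps go to the expert.
preds = [("inform", 0.97), ("request", 0.55), ("inform", 0.92),
         ("bye", 0.40), ("inform", 0.99)]
labels, queries = hybrid_annotate(preds)
# queries == [1, 3]; labels == ["inform", None, "inform", None, "inform"]
```

The annotation saving comes from `expert_queries` covering only a fraction of each sequence; the paper's actual acquisition and correction criteria are richer than this single threshold.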