🤖 AI Summary
This work addresses a limitation of traditional human-in-the-loop learning, which often reduces human input to simple label provision and fails to capture the nuanced judgments humans make. To overcome this, the authors propose an interactive learning framework that moves beyond conventional label queries by incorporating richer interaction mechanisms, such as ranking and exemplar selection, and formalizing corresponding probabilistic models of human responses. Building on this foundation, they design an active learning algorithm that maximizes the information gained per interaction. Empirical evaluations on word-sentiment and image-aesthetics datasets demonstrate the approach's effectiveness: it substantially reduces sample complexity and, on the word-sentiment task, cuts learning time by more than 57% relative to standard label-based active learning.
📝 Abstract
Integrating human expertise into machine learning systems often reduces the role of experts to labeling oracles, a paradigm that limits the amount of information exchanged and fails to capture the nuances of human judgment. We address this challenge by developing a human-in-the-loop framework for learning binary classifiers with rich query types, namely item ranking and exemplar selection. We first introduce probabilistic human response models for these rich queries, motivated by the experimentally observed relationship between an item's perceived implicit score and its distance to the unknown classifier. Using these models, we then design active learning algorithms that leverage the rich queries to increase the information gained per interaction. We provide theoretical bounds on sample complexity and develop a computationally efficient variational approximation. Through experiments with simulated annotators derived from crowdsourced word-sentiment and image-aesthetics datasets, we demonstrate significant reductions in sample complexity. We further extend the active learning strategies to select queries that maximize information rate, explicitly balancing informational value against annotation cost. In the word-sentiment classification task, this algorithm reduces learning time by more than 57% compared to traditional label-only active learning.
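The abstract's core selection criterion, picking the query whose expected information gain per unit annotation cost is highest, can be illustrated with a toy Bayesian sketch. This is not the paper's algorithm or response model: the 1-D threshold hypotheses, the logistic noise scale, and the candidate items and costs below are all invented for illustration.

```python
import math

# Hypothetical toy setup: belief over candidate decision thresholds on a 1-D feature.
thresholds = [0.2, 0.4, 0.6, 0.8]
prior = [0.25] * 4  # uniform prior over the four hypotheses

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def label_likelihood(x, t):
    # Assumed logistic response model: P(label = 1 | item x, threshold t);
    # the noise scale 0.1 is an arbitrary illustrative choice.
    return 1.0 / (1.0 + math.exp(-(x - t) / 0.1))

def expected_info_gain(x, belief):
    """Expected entropy reduction from asking for a label on item x."""
    h0 = entropy(belief)
    gain = 0.0
    for y in (0, 1):
        like = [label_likelihood(x, t) if y == 1 else 1.0 - label_likelihood(x, t)
                for t in thresholds]
        py = sum(l * b for l, b in zip(like, belief))  # marginal P(response = y)
        if py == 0:
            continue
        post = [l * b / py for l, b in zip(like, belief)]  # Bayes update
        gain += py * (h0 - entropy(post))
    return gain

# Candidate queries as (item, annotation cost). A richer query type (e.g. a
# ranking) would carry a higher cost but could be scored the same way under
# its own response model.
candidates = [(0.5, 1.0), (0.9, 1.0), (0.55, 2.0)]
best = max(candidates, key=lambda c: expected_info_gain(c[0], prior) / c[1])
print(best)  # → (0.5, 1.0): ambiguous and cheap beats informative-but-costly
```

The ranking by gain-per-cost, rather than raw gain, is what the abstract calls maximizing information rate: the item at 0.55 is roughly as informative as the one at 0.5, but its doubled cost halves its rate.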