Meta-learning Representations for Learning from Multiple Annotators

📅 2025-06-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses few-shot classification under multi-annotator settings, where models must learn robust classifiers from limited and noisy crowd-sourced labels. We propose the first differentiable meta-learning framework that integrates the Expectation-Maximization (EM) algorithm end-to-end: within the inner loop, EM computes closed-form estimates of annotator confusion matrices; in the outer loop, meta-gradients update a shared representation network. Our approach unifies meta-knowledge transfer, neural representation learning, and probabilistic modeling—requiring neither annotator priors nor auxiliary validation sets. Evaluated on synthetic noisy benchmarks and real-world crowdsourced datasets (e.g., Birds, WebVision), our method significantly outperforms existing approaches, achieving high accuracy with only 1–5 noisy examples per class. It effectively mitigates the dual challenges of data scarcity and label noise in few-shot learning with heterogeneous annotators.

Technology Category

Application Category

📝 Abstract
We propose a meta-learning method for learning from multiple noisy annotators. In many applications such as crowdsourcing services, labels for supervised learning are given by multiple annotators. Since the annotators have different skills or biases, given labels can be noisy. To learn accurate classifiers, existing methods require many noisy annotated data. However, sufficient data might be unavailable in practice. To overcome the lack of data, the proposed method uses labeled data obtained in different but related tasks. The proposed method embeds each example in tasks to a latent space by using a neural network and constructs a probabilistic model for learning a task-specific classifier while estimating annotators' abilities on the latent space. This neural network is meta-learned to improve the expected test classification performance when the classifier is adapted to a given small amount of annotated data. This classifier adaptation is performed by maximizing the posterior probability via the expectation-maximization (EM) algorithm. Since each step in the EM algorithm is easily computed as a closed-form and is differentiable, the proposed method can efficiently backpropagate the loss through the EM algorithm to meta-learn the neural network. We show the effectiveness of our method with real-world datasets with synthetic noise and real-world crowdsourcing datasets.
Problem

Research questions and friction points this paper is trying to address.

Learning from multiple noisy annotators efficiently
Overcoming data scarcity in multi-annotator scenarios
Meta-learning representations for improved classifier adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-learning for noisy annotators
Neural network embeds tasks
EM algorithm differentiates adaptation
🔎 Similar Papers
No similar papers found.