🤖 AI Summary
When crowdworkers lack demographic representativeness, models trained on their annotations risk systematic bias, deviating from the true population's perspectives. To address this, we propose PAIR (Population-Aligned Instance Replication), the first method to adapt survey research's population-alignment principle to NLP data construction. PAIR statistically models the distribution of annotator types and corrects annotation bias without collecting new labels: it replicates underrepresented annotators' instances in proportion to their population prevalence, combining this with importance reweighting and synthetic data generation. Evaluated on hate speech detection, PAIR substantially improves model calibration, reducing Expected Calibration Error (ECE) by over 40%, and nearly matches the performance of models trained on ideal representative data. It also improves cross-demographic fairness and out-of-distribution generalization, demonstrating robustness across diverse population subgroups.
📝 Abstract
Models trained on crowdsourced labels may not reflect broader population views when annotator pools are not representative. Since collecting representative labels is challenging, we propose Population-Aligned Instance Replication (PAIR), a method to address this bias through statistical adjustment. Using a simulation study of hate speech and offensive language detection, we create two types of annotators with different labeling tendencies and generate datasets with varying proportions of the two types. Models trained on unbalanced annotator pools show poor calibration compared to those trained on representative data. However, PAIR, which duplicates labels from underrepresented annotator groups to match population proportions, significantly reduces bias without requiring new data collection. These results suggest that statistical techniques from survey research can help align model training with target populations even when representative annotator pools are unavailable. We conclude with three practical recommendations for improving training data quality.
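The replication step the abstract describes — duplicating labels from underrepresented annotator groups until group proportions match the target population — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the `pair_replicate` function, the annotation dictionaries, and the `target_props` mapping are assumed names for the sake of the example.

```python
import random

def pair_replicate(annotations, target_props, seed=0):
    """Duplicate labels from underrepresented annotator groups so that
    group proportions in the training pool match the target population
    (a simplified sketch of PAIR's replication idea).

    annotations: list of dicts, each with a "group" key identifying the
        annotator type that produced the label.
    target_props: dict mapping group name -> desired population share.
    """
    rng = random.Random(seed)
    # Partition annotations by annotator group.
    by_group = {}
    for ann in annotations:
        by_group.setdefault(ann["group"], []).append(ann)
    total = len(annotations)
    # The most over-represented group (relative to its target share)
    # anchors the final pool size; all other groups are upsampled
    # toward it, so no original label is ever discarded.
    scale = max(len(items) / (target_props[g] * total)
                for g, items in by_group.items())
    balanced = []
    for g, items in by_group.items():
        need = round(target_props[g] * total * scale)
        balanced.extend(items)  # keep every original label
        # Draw the remaining duplicates uniformly at random.
        balanced.extend(rng.choices(items, k=max(0, need - len(items))))
    return balanced
```

For example, with 80 labels from group A and 20 from group B and a 50/50 target population, the group-B labels are duplicated until both groups contribute 80 labels each; equivalently, one could keep the pool size fixed and apply importance weights, which the paper pairs with replication.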