🤖 AI Summary
This work addresses the challenge of studying bias in large-scale pretrained language models, where prohibitive training costs hinder direct experimentation during pretraining. To overcome this, the authors propose using small-scale, low-cost BabyLMs as proxy sandboxes, replicating pretraining with a compact BERT architecture on editable corpora. They demonstrate that these proxies faithfully reproduce the bias dynamics and performance evolution observed in full-scale BERT. Because the proxies also behave consistently with BERT across diverse intra-model and post-model debiasing methods, the framework reduces experimental costs from over 500 GPU-hours to under 30 GPU-hours. This efficient setup not only replicates established findings but also uncovers the critical roles of gender imbalance and toxic content in bias formation, establishing an accessible and reproducible paradigm for pretraining-stage debiasing research.
📝 Abstract
Pre-trained language models (LMs) have, over the last few years, grown substantially in both societal adoption and training costs. This rapid growth has constrained progress in understanding and mitigating their biases. Since re-training LMs is prohibitively expensive, most debiasing work has focused on post-hoc or masking-based strategies, which often fail to address the underlying causes of bias. In this work, we seek to democratise pre-model debiasing research by using low-cost proxy models. Specifically, we investigate BabyLMs, compact BERT-like models trained on small, mutable corpora, which can approximate the bias acquisition and learning dynamics of larger models. We show that, despite their drastically reduced size, BabyLMs display patterns of intrinsic bias formation and performance development closely aligned with those of standard BERT models. Furthermore, correlations between BabyLMs and BERT hold across multiple intra-model and post-model debiasing methods. Leveraging these similarities, we conduct pre-model debiasing experiments with BabyLMs, replicating prior findings and presenting new insights into the influence of gender imbalance and toxicity on bias formation. Our results demonstrate that BabyLMs can serve as an effective sandbox for large-scale LMs, reducing pre-training costs from over 500 GPU-hours to under 30 GPU-hours. This provides a way to democratise pre-model debiasing research and enables faster, more accessible exploration of methods for building fairer LMs.