Fine-Tuned LLMs are "Time Capsules" for Tracking Societal Bias Through Books

📅 2025-02-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study quantifies diachronic shifts in societal biases—specifically gender, sexual orientation, race, and religion—across 1950–2019. Methodologically, it introduces fine-tuned Llama-2/3 models as “temporal capsules,” integrated with prompt engineering and BookPAGE, the first decade-annotated fictional corpus (593 novels), to establish a bias-oriented evaluation framework. Key contributions are threefold: (1) a novel LLM-fine-tuning–driven paradigm for temporal bias analysis; (2) the release of BookPAGE, the first literary bias corpus with fine-grained decade-level annotations; and (3) empirical evidence demonstrating that bias representations stem predominantly from training data rather than model architecture. Major findings include: depictions of female leadership rising from 8% (1950s) to 22% (2010s); mentions of same-sex relationships increasing from 0% (1980s) to 10% (2000s); and negative portrayals of Islam surging by 12 percentage points in the 2000s—collectively confirming strong alignment between model-embedded biases and contemporaneous sociocultural trends.

📝 Abstract
Books, while often rich in cultural insights, can also mirror societal biases of their eras - biases that Large Language Models (LLMs) may learn and perpetuate during training. We introduce a novel method to trace and quantify these biases using fine-tuned LLMs. We develop BookPAGE, a corpus comprising 593 fictional books across seven decades (1950-2019), to track bias evolution. By fine-tuning LLMs on books from each decade and using targeted prompts, we examine shifts in biases related to gender, sexual orientation, race, and religion. Our findings indicate that LLMs trained on decade-specific books manifest biases reflective of their times, with both gradual trends and notable shifts. For example, model responses showed a progressive increase in the portrayal of women in leadership roles (from 8% to 22%) from the 1950s to 2010s, with a significant uptick in the 1990s (from 4% to 12%), possibly aligning with third-wave feminism. Same-sex relationship references increased markedly from the 1980s to 2000s (from 0% to 10%), mirroring growing LGBTQ+ visibility. Concerningly, negative portrayals of Islam rose sharply in the 2000s (26% to 38%), likely reflecting post-9/11 sentiments. Importantly, we demonstrate that these biases stem mainly from the books' content and not the models' architecture or initial training. Our study offers a new perspective on societal bias trends by bridging AI, literary studies, and social science research.
Problem

Research questions and friction points this paper is trying to address.

Tracking societal bias evolution using fine-tuned LLMs
Quantifying bias shifts in gender, sexual orientation, race, and religion
Analyzing bias trends in fictional books over seven decades
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned LLMs track societal bias
BookPAGE corpus spans seven decades
Targeted prompts quantify bias shifts
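The probing step described above can be sketched as follows: sample completions from a decade-tuned model for a targeted prompt, then score what fraction exhibit a given portrayal (e.g., women in leadership roles). This is a minimal illustrative sketch, not the paper's actual pipeline; the keyword lists, the `leadership_rate` helper, and the example completions are all hypothetical, and the paper's real classification of responses may differ.

```python
import re

# Hypothetical, illustrative lexicons (NOT the paper's actual ones).
FEMALE_TERMS = {"she", "her", "woman", "women"}
LEADERSHIP_TERMS = {"ceo", "boss", "leader", "manager", "president", "director"}


def tokens(text):
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))


def leadership_rate(completions):
    """Fraction of completions mentioning both a female referent
    and a leadership role."""
    def hits(text):
        words = tokens(text)
        return bool(words & FEMALE_TERMS) and bool(words & LEADERSHIP_TERMS)

    if not completions:
        return 0.0
    return sum(hits(c) for c in completions) / len(completions)


# Made-up completions standing in for samples from two decade-tuned models.
samples_1950s = [
    "He was the boss of the factory; his wife kept the house.",
    "The manager, a stern man, reviewed the ledgers.",
]
samples_2010s = [
    "She became CEO of the firm after the merger.",
    "The director, a woman of quiet resolve, led the team.",
]

print(leadership_rate(samples_1950s))  # 0.0
print(leadership_rate(samples_2010s))  # 1.0
```

Repeating this per decade yields the trend lines the paper reports (e.g., female-leadership portrayals rising from 8% in the 1950s to 22% in the 2010s), with the caveat that a keyword match is a crude stand-in for whatever response-labeling procedure the authors used.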