🤖 AI Summary
This paper addresses the longstanding absence of economic compensation for the human data producers behind large language model (LLM) training. Method: Drawing on labor economics, the study quantifies the implicit human labor cost embedded in LLM training data by modeling the workforce required to recreate each corpus from scratch—based on corpus size statistics from 64 mainstream LLMs (2016–2024) and conservative hourly wage assumptions. Contribution/Results: The analysis finds that these human labor costs are 10–1,000× the total computational and engineering expenditures for model training, vastly exceeding prevailing infrastructure investments. Consequently, the paper advances the normative claim that data producers should be recognized as primary stakeholders entitled to remuneration, directly challenging the industry's default practice of uncompensated data acquisition. By providing empirical grounding and theoretical framing, this work lays a foundation for equitable and sustainable data economies.
📝 Abstract
Training a state-of-the-art Large Language Model (LLM) is an increasingly expensive endeavor due to growing computational, hardware, energy, and engineering demands. Yet, an often-overlooked (and seldom paid) expense is the human labor behind these models' training data. Every LLM is built on an unfathomable amount of human effort: trillions of carefully written words sourced from books, academic papers, codebases, social media, and more. This position paper aims to assign a monetary value to this labor and argues that the most expensive part of producing an LLM should be the compensation provided to training data producers for their work. To support this position, we study 64 LLMs released between 2016 and 2024, estimating what it would cost to pay people to produce their training datasets from scratch. Even under highly conservative estimates of wage rates, the costs of these models' training datasets are 10-1000 times larger than the costs to train the models themselves, representing a significant financial liability for LLM providers. In the face of the massive gap between the value of training data and the lack of compensation for its creation, we highlight and discuss research directions that could enable fairer practices in the future.
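The estimation approach described above can be illustrated with a back-of-envelope sketch. All parameter values below (tokens-to-words ratio, writing throughput, hourly wage) are illustrative assumptions for demonstration, not figures taken from the paper:

```python
# Hedged sketch of the corpus-size × wage estimation approach.
# Every numeric parameter here is an assumed placeholder value.

def dataset_labor_cost(num_tokens: float,
                       words_per_token: float = 0.75,  # assumed token-to-word ratio
                       words_per_hour: float = 500.0,  # assumed writing throughput
                       hourly_wage: float = 15.0) -> float:
    """Estimate the cost of paying writers to produce a corpus from scratch."""
    words = num_tokens * words_per_token   # convert corpus size to word count
    hours = words / words_per_hour         # person-hours of writing labor
    return hours * hourly_wage             # total labor cost in dollars

# Example: a hypothetical 1-trillion-token training corpus
cost = dataset_labor_cost(1e12)
print(f"${cost:,.0f}")  # → $22,500,000,000
```

Under these assumed parameters, a trillion-token corpus implies tens of billions of dollars in writing labor, which is consistent in spirit with the paper's claim that dataset labor costs dwarf training compute costs by orders of magnitude.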