Fairshare Data Pricing for Large Language Models

📅 2025-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Persistent price unfairness in training-data markets for large language models (LLMs) undermines market efficiency and sustainability. Method: We propose the first fairness-aware pricing framework that integrates data value quantification with market equilibrium. It embeds state-of-the-art data valuation techniques (Shapley values and gradient-based attribution) into a game-theoretic pricing model, coupled with simulation of LLM training efficacy, to enable dynamic pricing driven by each datum's marginal contribution. Contribution/Results: We formally prove that the mechanism achieves dual fairness ("fairshare") for both buyers and sellers while respecting buyers' budget constraints. Experiments on mathematical reasoning, medical diagnosis, and physics reasoning tasks show that the framework increases model performance per unit of data expenditure by 23–37%, raises seller revenue by 19–41%, and substantially improves market participation and long-term viability.

📝 Abstract
Training data is a pivotal resource for building large language models (LLMs), but unfair pricing in data markets poses a serious challenge for both data buyers (e.g., LLM builders) and sellers (e.g., human annotators): it discourages market participation, reducing data quantity and quality. In this paper, we propose a fairshare pricing framework that sets training data prices using data valuation methods to quantify each datum's contribution to LLMs. In our framework, buyers make purchasing decisions using data valuation, and sellers set prices to maximize their profits based on anticipated buyer purchases. We theoretically show that pricing derived from our framework is tightly linked to data valuation and buyers' budgets, and is optimal for both buyers and sellers. Through market simulations using current LLMs and datasets (math problems, medical diagnosis, and physical reasoning), we show that our framework is fairshare for buyers, ensuring that the data they purchase reflects its model-training value and yielding higher LLM task performance per dollar spent on data, and fairshare for sellers, ensuring they sell their data at optimal prices. Our framework lays the foundation for future research on equitable and sustainable data markets for large-scale AI.
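The core idea in the abstract, pricing each data point by its marginal contribution to model performance, can be sketched in miniature. The code below is a hedged illustration, not the paper's implementation: it uses an exact Shapley computation over a toy three-point dataset with a stand-in utility function (the paper instead measures actual LLM task performance and derives prices from a game-theoretic equilibrium). The `POINT_QUALITY` scores, the saturating utility, and the budget-proportional pricing rule are all simplifying assumptions made for illustration.

```python
import itertools
import math

# Hypothetical per-point quality scores; in the paper's setting, value
# would come from LLM training efficacy, not a fixed lookup table.
POINT_QUALITY = {"a": 0.5, "b": 0.3, "c": 0.2}

def utility(subset):
    """Toy stand-in for model performance on a training subset:
    increasing in total quality, with diminishing returns."""
    total = sum(POINT_QUALITY[p] for p in subset)
    return 1.0 - math.exp(-2.0 * total)

def shapley_values(points):
    """Exact Shapley value of each point w.r.t. the utility function."""
    n = len(points)
    values = {p: 0.0 for p in points}
    for p in points:
        others = [q for q in points if q != p]
        for r in range(n):
            for subset in itertools.combinations(others, r):
                # Standard Shapley weight for a coalition of size r.
                weight = (math.factorial(r) * math.factorial(n - r - 1)
                          / math.factorial(n))
                values[p] += weight * (utility(set(subset) | {p})
                                       - utility(subset))
    return values

def fairshare_prices(values, budget):
    """Price each point proportionally to its marginal contribution,
    exhausting the buyer's budget (a simplification of the paper's
    equilibrium pricing)."""
    total = sum(values.values())
    return {p: budget * v / total for p, v in values.items()}

vals = shapley_values(list(POINT_QUALITY))
prices = fairshare_prices(vals, budget=100.0)
```

By the efficiency property of Shapley values, the per-point values sum to the utility of the full dataset, so higher-quality points receive proportionally higher prices and the buyer's budget is fully allocated.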
Problem

Research questions and friction points this paper is trying to address.

Large Language Model
Data Market
Price Fairness
Innovation

Methods, ideas, or system contributions that make the work stand out.

FairShare Pricing
Large Language Model
AI Data Market
Luyang Zhang
Heinz College of Information Systems and Public Policy, Carnegie Mellon University
Cathy Jiao
Carnegie Mellon University
Natural Language Processing, Data Attribution, Machine Learning, Deep Learning
Beibei Li
Heinz College of Information Systems and Public Policy, Carnegie Mellon University
Chenyan Xiong
Associate Professor, Carnegie Mellon University
Information Retrieval, Language Models, Natural Language Understanding