A Human-Centric Framework for Data Attribution in Large Language Models

📅 2026-02-11
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the unauthorized use of creators' data in large language model training and the resulting risk of inadvertent plagiarism, both of which underscore the absence of effective data provenance mechanisms. To tackle this challenge, the paper proposes a human-centric data attribution framework that, for the first time, integrates the interests of creators, users, and platforms into a negotiable, parameterized system. By combining natural language processing techniques with policy governance and economic incentive mechanisms, the framework enables customizable and verifiable attribution schemes tailored to applications such as creative-writing assistance and fact-checking. This approach not only strengthens accountability and fairness in data usage but also fosters a sustainable data economy.

๐Ÿ“ Abstract
In the current Large Language Model (LLM) ecosystem, creators have little agency over how their data is used, and LLM users may find themselves unknowingly plagiarizing existing sources. Attribution of LLM-generated text to LLM input data could help with these challenges, but so far we have more questions than answers: what elements of LLM outputs require attribution, what goals should attribution serve, and how should it be implemented? We contribute a human-centric data attribution framework, which situates the attribution problem within the broader data economy. Specific use cases for attribution, such as creative writing assistance or fact-checking, can be specified via a set of parameters (including stakeholder objectives and implementation criteria). These criteria are up for negotiation by the relevant stakeholder groups: creators, LLM users, and their intermediaries (publishers, platforms, AI companies). The outcome of domain-specific negotiations can be implemented and tested for whether the stakeholder goals are achieved. The proposed approach provides a bridge between methodological NLP work on data attribution, governance work on policy interventions, and economic analysis of creator incentives for a sustainable equilibrium in the data economy.
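The abstract's idea of specifying an attribution use case "via a set of parameters" negotiated by stakeholder groups can be sketched as a small data structure. The names below (`AttributionSpec`, its fields, the example objectives) are illustrative assumptions for this sketch, not the paper's actual formalism.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a parameterized attribution use case:
# each stakeholder group states an objective, and implementation
# criteria are recorded as negotiable key-value parameters.
@dataclass
class AttributionSpec:
    use_case: str                    # e.g. "creative-writing" or "fact-checking"
    stakeholders: tuple              # groups party to the negotiation
    objectives: dict = field(default_factory=dict)  # stakeholder -> stated goal
    criteria: dict = field(default_factory=dict)    # implementation parameters

    def agreed_by_all(self) -> bool:
        """A spec is actionable only once every stakeholder has stated an objective."""
        return all(s in self.objectives for s in self.stakeholders)

spec = AttributionSpec(
    use_case="fact-checking",
    stakeholders=("creators", "llm_users", "platforms"),
    objectives={
        "creators": "verifiable credit for sourced claims",
        "llm_users": "avoid inadvertent plagiarism",
        "platforms": "low-overhead provenance checks",
    },
    criteria={"granularity": "sentence", "verifiable": True},
)
print(spec.agreed_by_all())  # -> True
```

The point of the sketch is only that a negotiated outcome becomes a concrete, testable artifact: once all parties' objectives are recorded, the `criteria` can be handed to an implementation and evaluated against the stated goals.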
Problem

Research questions and friction points this paper is trying to address.

data attribution
large language models
plagiarism
creator agency
LLM output
Innovation

Methods, ideas, or system contributions that make the work stand out.

data attribution
human-centric framework
large language models
stakeholder negotiation
data economy