🤖 AI Summary
This study addresses the lack of transparency and traceability in the training data lifecycle of large language models (LLMs), which undermines their trustworthiness and auditability. Through a systematic review of 95 publications from the past decade, this work proposes the first unified classification framework tailored to the LLM data lifecycle, structured around three core dimensions: data provenance, transparency, and traceability. The framework integrates key technical approaches, including data generation, watermarking, bias measurement, privacy preservation, and governance tools, thereby clarifying the field's boundaries and revealing the inherent trade-offs between transparency and opacity. It also synthesizes emerging research trends and open challenges, offering both a theoretical foundation and practical guidance for improving the credibility of LLM training data.
📝 Abstract
Large language models (LLMs) are deployed at scale, yet their training data lifecycle remains opaque. This survey synthesizes research from the past ten years along three tightly coupled axes: (1) data provenance, (2) transparency, and (3) traceability, together with three supporting pillars: (4) bias & uncertainty, (5) data privacy, and (6) the tools and techniques that operationalize them. A central contribution is a proposed taxonomy that defines the field's domains and lists their corresponding artifacts. Through an analysis of 95 publications, this work identifies key methodologies for data generation, watermarking, bias measurement, data curation, and data privacy, and examines the inherent trade-off between transparency and opacity.