🤖 AI Summary
To address the challenges of tracing provenance and ensuring the credibility of large language model (LLM)–generated content, this paper proposes the first holistic four-dimensional provenance framework integrating both model- and data-centric perspectives: model origin identification, architectural and mechanistic analysis, training data attribution, and external information verification. We introduce a novel "prior–posterior" dual-paradigm classification system and unify techniques including model fingerprinting, response-level verification, and traceability-aware embedding to support both proactive embedding and retrospective inference. The framework systematically consolidates fragmented provenance research efforts, enhancing the explainability, verifiability, and transparency of AI-generated content. It establishes a theoretical foundation and scalable technical methodology for detecting AI-generated content (AIGC), identifying model identities, and ensuring information reliability.
📝 Abstract
The rapid advancement of large language models (LLMs) has revolutionized artificial intelligence, shifting from supporting objective tasks (e.g., recognition) to empowering subjective decision-making (e.g., planning, decision-making). This marks the dawn of general and powerful AI, with applications spanning a wide range of fields, including programming, education, healthcare, finance, and law. However, their deployment introduces multifaceted risks. Due to the black-box nature of LLMs and the human-like quality of their generated content, issues such as hallucinations, bias, unfairness, and copyright infringement become particularly significant. In this context, tracing the provenance of generated content from multiple perspectives is essential.
This survey presents a systematic investigation into provenance tracking for content generated by LLMs, organized around four interrelated dimensions that together capture both model- and data-centric perspectives. From the model perspective, Model Sourcing treats the model as a whole, aiming to distinguish content generated by specific LLMs from content authored by humans. Model Structure Sourcing delves into the internal generative mechanisms, analyzing the architectural components that shape the model's outputs. From the data perspective, Training Data Sourcing focuses on internal attribution, tracing the origins of generated content back to the model's training data. In contrast, External Data Sourcing emphasizes external validation, identifying external information used to support or influence the model's responses. We also propose a dual-paradigm taxonomy that classifies existing sourcing methods into prior-based (proactive traceability embedding) and posterior-based (retrospective inference) approaches. Traceability across these dimensions enhances the transparency, accountability, and trustworthiness of LLM deployment in real-world applications.
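The prior/posterior distinction can be made concrete with a toy sketch. The following is an illustrative example (not a method from the survey) loosely modeled on token-level green-list watermarking: a prior-based scheme proactively embeds a signal at generation time by biasing sampling toward a "green" half of the vocabulary keyed to the previous token, while a posterior-based detector retrospectively measures the green-token rate. The vocabulary, hash scheme, and hard green-only sampling are all simplifying assumptions for illustration.

```python
import hashlib
import random

# Hypothetical 100-token vocabulary used only for this illustration.
VOCAB = [f"tok{i}" for i in range(100)]

def green_list(prev_token: str) -> set:
    """Derive the 'green' half of the vocabulary from the previous token.

    The previous token seeds a hash, so generator and detector can
    reconstruct the same partition without sharing any state.
    """
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def generate(length: int, seed: int = 0) -> list:
    """Prior paradigm: proactively bias sampling toward green tokens.

    A real scheme would add a soft logit bias; here we always pick green.
    """
    rng = random.Random(seed)
    text = ["tok0"]
    for _ in range(length):
        text.append(rng.choice(sorted(green_list(text[-1]))))
    return text

def detect(text: list) -> float:
    """Posterior paradigm: retrospectively measure the green-token rate."""
    hits = sum(tok in green_list(prev) for prev, tok in zip(text, text[1:]))
    return hits / (len(text) - 1)

watermarked = generate(50)
print(detect(watermarked))  # 1.0: every token was drawn from its green list

rng = random.Random(1)
unmarked = ["tok0"] + [rng.choice(VOCAB) for _ in range(50)]
print(detect(unmarked))     # roughly 0.5: unwatermarked text is uncorrelated
```

The key design point the sketch illustrates: the prior paradigm requires cooperation from the generator before the content exists, while the posterior paradigm needs only the finished text, which is why the two families trade off robustness against deployability.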