🤖 AI Summary
The unclear provenance of training data for black-box AI models poses significant forensic challenges in detecting unauthorized use of sensitive information such as personal health records, copyrighted texts, and journalistic content. Method: This paper introduces the “Information Isotope” theoretical framework, inspired by isotopic tracing in chemistry, which enables verifiable attribution of training data solely from model outputs. It integrates statistical significance testing, semantic consistency modeling, and length-robust fingerprint design into a lightweight, output-only information fingerprinting and matching mechanism. Contribution/Results: Evaluated on 10 state-of-the-art large language models (e.g., GPT-4o, Claude-3.5) and four sensitive datasets (clinical notes, copyrighted books, news articles, etc.), the method achieves 99% classification accuracy (p < 0.001) per examined data entry of roughly research-paper length, yielding statistically significant forensic evidence. It represents the first approach shown to empirically identify training-data infringement across diverse black-box models and application domains.
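The summary names the method's components but not their construction. As a rough illustration only, a minimal output-only fingerprint-and-match pipeline in this spirit could treat rare word n-grams as isotope-like markers and score their reappearance in model generations with a one-sided binomial test; everything below (function names, the n-gram fingerprint, the test statistic) is an illustrative assumption, not the paper's implementation.

```python
from math import comb
from typing import Iterable

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Word-level n-grams of a document; long n-grams are rare enough
    to behave like distinctive 'isotope' markers."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def count_matches(doc: str, generations: Iterable[str], n: int = 8) -> tuple[int, int]:
    """How many of the document's fingerprints resurface verbatim
    in the black-box model's generations."""
    fingerprints = ngrams(doc, n)
    generated: set[tuple[str, ...]] = set()
    for g in generations:
        generated |= ngrams(g, n)
    return len(fingerprints & generated), len(fingerprints)

def binomial_tail_p(hits: int, total: int, null_rate: float) -> float:
    """One-sided tail probability of seeing >= hits matches if each
    fingerprint resurfaces with probability null_rate by chance alone."""
    return sum(
        comb(total, k) * null_rate**k * (1 - null_rate) ** (total - k)
        for k in range(hits, total + 1)
    )

# null_rate would be estimated from control documents known NOT to be
# in the training corpus; a document is flagged as likely trained-on
# only when the tail probability falls below a preset threshold, e.g. 0.001.
```

The length robustness the summary mentions would then correspond to the fact that longer entries supply more markers, driving the tail probability down.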
📝 Abstract
In light of scaling laws, many AI institutions are intensifying efforts to build advanced AI models on extensive collections of high-quality human data. However, in the rush to stay competitive, some institutions may inadvertently or even deliberately include unauthorized data (such as privacy- or intellectual-property-sensitive content) in AI training, infringing on the rights of data owners. Compounding this issue, these advanced AI services are typically built on opaque cloud platforms that restrict access to internal information during training and inference, leaving only the generated outputs available for forensics. Thus, despite the legal frameworks that various countries have introduced to safeguard data rights, uncovering evidence of data misuse in modern opaque AI applications remains a significant challenge. In this paper, inspired by the ability of isotopes to trace elements through chemical reactions, we introduce the concept of information isotopes and elucidate their properties for tracing training data within opaque AI systems. We further propose an information isotope tracing method designed to identify and provide evidence of unauthorized data usage by detecting the presence of target information isotopes in AI generations. We conduct experiments on ten AI models (including GPT-4o, Claude-3.5, and DeepSeek) and four benchmark datasets in critical domains (medical data, copyrighted books, and news). Results show that our method can distinguish training datasets from non-training datasets with 99% accuracy and significant evidence ($p < 0.001$) by examining a single data entry equivalent in length to a research paper. These findings highlight the potential of our work as an inclusive tool that empowers individuals, including those without AI expertise, to safeguard their data rights in the rapidly evolving era of AI advancements and applications.
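For intuition on where a per-entry significance level such as $p < 0.001$ can arise (the abstract does not spell out the exact statistic, so the formulation below is illustrative rather than the authors' own): with $n$ isotope markers examined, $m$ of them detected in the model's generations, and a chance-match rate $q$ estimated from data known to lie outside the training set, a one-sided binomial tail test yields

$$
p = \sum_{k=m}^{n} \binom{n}{k} q^{k} (1-q)^{n-k},
$$

which shrinks rapidly as the examined entry grows longer and supplies more markers, consistent with the reported use of entries roughly the length of a research paper.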