DATABench: Evaluating Dataset Auditing in Deep Learning from an Adversarial Perspective

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
The lack of transparency around training data in deep learning poses significant privacy and copyright risks, and the robustness of existing dataset auditing techniques against adversarial attacks remains poorly understood. Method: We introduce DATABench, the first benchmark to systematically evaluate auditing methods under evasion attacks (17 variants) and forgery attacks (5 variants). We propose a novel taxonomy that classifies auditing methods by their reliance on internal features (inherent to the data) versus external features (artificially introduced for auditing), and design systematic attack strategies: decoupling, removal, and detection for evasion, and adversarial example-based methods for forgery. Contribution/Results: Experiments across 9 representative auditing methods reveal that none remain sufficiently robust or distinctive under both attack types. This work uncovers critical security vulnerabilities in dataset auditing, providing a rigorous benchmark, methodological framework, and empirical foundation for developing trustworthy, interference-resilient auditing mechanisms.

📝 Abstract
The widespread application of deep learning across diverse domains hinges critically on the quality and composition of training datasets. However, the common lack of disclosure regarding their usage raises significant privacy and copyright concerns. Dataset auditing techniques, which aim to determine whether a specific dataset was used to train a given suspicious model, offer a promising way to address these transparency gaps. While prior work has developed various auditing methods, their resilience against dedicated adversarial attacks remains largely unexplored. To bridge this gap, this paper initiates a comprehensive study of dataset auditing from an adversarial perspective. We begin by introducing a novel taxonomy that classifies existing methods by their reliance on internal features (IF), inherent to the data, versus external features (EF), artificially introduced for auditing. We then formulate two primary attack types: evasion attacks, designed to conceal the use of a dataset, and forgery attacks, intended to falsely implicate an unused dataset. Building on this understanding of existing methods and attack objectives, we propose systematic attack strategies: decoupling, removal, and detection for evasion; adversarial example-based methods for forgery. These formulations and strategies lead to our new benchmark, DATABench, comprising 17 evasion attacks, 5 forgery attacks, and 9 representative auditing methods. Extensive evaluations using DATABench reveal that none of the evaluated auditing methods are sufficiently robust or distinctive under adversarial settings. These findings underscore the urgent need for a more secure and reliable dataset auditing method capable of withstanding sophisticated adversarial manipulation. Code is available at https://github.com/shaoshuo-ss/DATABench.
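To make the forgery idea concrete, here is a minimal, self-contained sketch (an illustration, not the paper's implementation): it assumes a hypothetical confidence-based auditor that flags a sample as "trained on" when the suspicious model is highly confident about it, and uses a single FGSM-style signed-gradient step to push an unused sample into that high-confidence region. The toy model, the auditor rule, and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "suspicious model": logistic regression trained only on dataset A
# (class 0 clustered around -1, class 1 around +1 in 2D).
X_a = np.concatenate([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y_a = np.array([0] * 50 + [1] * 50)

w, b = np.zeros(2), 0.0
for _ in range(500):  # plain gradient descent on the logistic loss
    p = sigmoid(X_a @ w + b)
    g = p - y_a
    w -= 0.1 * (X_a.T @ g) / len(y_a)
    b -= 0.1 * g.mean()

# Hypothetical confidence-based auditor: a sample "looks trained-on"
# when the model is very confident about its label.
def audit_score(x):
    p = sigmoid(x @ w + b)
    return max(p, 1.0 - p)

# An unused sample from dataset B sits near the decision boundary,
# so the auditor (correctly) reports low confidence for it.
x_b = np.array([0.05, -0.05])
before = audit_score(x_b)

# FGSM-style forgery: one signed-gradient step in the direction that
# increases model confidence, falsely implicating dataset B.
p_b = sigmoid(x_b @ w + b)
grad = w if p_b >= 0.5 else -w  # d(confidence)/dx up to a positive scale
x_forged = x_b + 0.8 * np.sign(grad)
after = audit_score(x_forged)
```

Under this toy setup, `before` stays near chance while `after` rises sharply, which is exactly the false-implication failure mode the forgery attacks probe.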
Problem

Research questions and friction points this paper is trying to address.

Evaluating dataset auditing robustness against adversarial attacks
Classifying auditing methods by internal vs external features
Proposing evasion and forgery attacks to test auditing methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces taxonomy for dataset auditing methods
Proposes evasion and forgery attack strategies
Develops DATABench benchmark for evaluation
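Complementing the taxonomy, a minimal sketch of the detection-style evasion strategy against an external-feature (EF) auditing scheme (an illustrative stand-in, not DATABench's implementation): an adversarial trainer screens the training pool for samples carrying an assumed additive trigger mark and drops them before training, so the auditor's external feature never enters the model. The trigger shape, outlier rule, and threshold are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training pool: 200 clean samples plus 20 carrying an external auditing
# mark, modeled here as a fixed additive trigger patch (an assumption).
clean = rng.normal(0.0, 1.0, (200, 16))
trigger = np.zeros(16)
trigger[:4] = 3.0
marked = rng.normal(0.0, 1.0, (20, 16)) + trigger
pool = np.concatenate([clean, marked])

# Detection-based evasion: the marks shift the pool mean, so project
# every sample onto the estimated mean direction and flag outliers.
direction = pool.mean(axis=0)
direction /= np.linalg.norm(direction)
scores = pool @ direction
threshold = scores.mean() + 2.0 * scores.std()

kept = pool[scores < threshold]  # train only on suspected-clean samples
```

Real EF schemes hide their marks far more carefully than this fixed patch, which is why the benchmark pairs detection with decoupling and removal strategies; the point here is only the mechanism.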