Holistic Audit Dataset Generation for LLM Unlearning via Knowledge Graph Traversal and Redundancy Removal

📅 2025-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses critical challenges in auditing large language model (LLM) unlearning, namely small-scale audit datasets, incomplete knowledge coverage, and severe semantic redundancy among forgotten and retained samples. To this end, the authors propose HANKER, the first automated audit dataset generation framework leveraging knowledge graph traversal and semantic redundancy elimination. HANKER enables fine-grained knowledge coverage and systematic deduplication, substantially improving audit comprehensiveness. Applied to the MUSE benchmark, it generates over 180,000 high-quality audit instances and uncovers thousands of previously undetected cases of implicit knowledge retention. Empirical analysis reveals that semantic redundancy inflates ROUGE and textual entailment scores by 6.4 and 2.8 percentage points, respectively, underscoring the necessity of redundancy removal for accurate unlearning evaluation. The work establishes a new paradigm and foundational infrastructure for trustworthy assessment of LLM privacy compliance and copyright safety.
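The knowledge-graph-traversal idea described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the triple format, the question template, and the traversal depth are all assumptions.

```python
# Illustrative sketch (not the HANKER implementation): generating audit
# question/answer pairs by traversing outgoing edges of a knowledge graph
# starting from entities in the forget set. Triple format and question
# template are hypothetical.
from collections import defaultdict

def build_graph(triples):
    """Index (subject, relation, object) triples by subject."""
    graph = defaultdict(list)
    for subj, rel, obj in triples:
        graph[subj].append((rel, obj))
    return graph

def generate_audit_cases(graph, seed_entities, max_depth=2):
    """Traverse up to max_depth hops from the seed entities and emit
    one audit question/answer pair per reachable fact."""
    cases, frontier, seen = [], list(seed_entities), set(seed_entities)
    for _ in range(max_depth):
        next_frontier = []
        for entity in frontier:
            for rel, obj in graph.get(entity, []):
                cases.append({
                    "question": f"What is the {rel} of {entity}?",
                    "answer": obj,
                })
                if obj not in seen:
                    seen.add(obj)
                    next_frontier.append(obj)
        frontier = next_frontier
    return cases

triples = [("Alice", "employer", "Acme"), ("Acme", "headquarters", "Berlin")]
cases = generate_audit_cases(build_graph(triples), {"Alice"})
# Multi-hop traversal reaches facts about "Acme" even though only
# "Alice" appears in the forget set -- the implicit retention the
# summary says smaller benchmarks miss.
```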

📝 Abstract
In recent years, Large Language Models (LLMs) have faced increasing demands to selectively remove sensitive information, protect privacy, and comply with copyright regulations through machine unlearning. While evaluating unlearning effectiveness is crucial, existing benchmarks are limited in scale and comprehensiveness, typically containing only a few hundred test cases. We identify two critical challenges in generating holistic audit datasets: ensuring audit adequacy and handling knowledge redundancy between the forget and retain datasets. To address these challenges, we propose HANKER, an automated framework for holistic audit dataset generation that leverages knowledge graphs to achieve fine-grained coverage and eliminate redundant knowledge. Applying HANKER to the popular MUSE benchmark, we generated over 69,000 and 111,000 audit cases for the News and Books datasets respectively, identifying thousands of knowledge memorization instances that the previous benchmark failed to detect. Our empirical analysis uncovers how knowledge redundancy significantly skews unlearning effectiveness metrics, with redundant instances artificially inflating observed memorization measurements: ROUGE from 19.7% to 26.1% and entailment scores from 32.4% to 35.2%, highlighting the necessity of systematic deduplication for accurate assessment.
Problem

Research questions and friction points this paper is trying to address.

Generate comprehensive audit datasets
Address knowledge redundancy challenges
Improve unlearning effectiveness evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge graph traversal for audit
Systematic redundancy removal technique
Automated dataset generation framework
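The redundancy-removal contribution listed above can be illustrated with a greedy near-duplicate filter. This is a simplified stand-in: HANKER eliminates *semantic* redundancy, whereas this sketch uses token-overlap (Jaccard) similarity with an arbitrary threshold purely for demonstration.

```python
# Simplified stand-in for semantic redundancy elimination: greedily keep
# an audit question only if it is not a near-duplicate (by token-overlap
# Jaccard similarity) of any already-kept question. The 0.7 threshold is
# illustrative, not from the paper.

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two questions."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def deduplicate(questions: list[str], threshold: float = 0.7) -> list[str]:
    """Drop questions too similar to one already kept."""
    kept: list[str] = []
    for q in questions:
        if all(jaccard(q, k) < threshold for k in kept):
            kept.append(q)
    return kept

questions = [
    "What is the employer of Alice?",
    "What was the employer of Alice?",  # near-duplicate, filtered out
    "Where is Acme headquartered?",
]
unique = deduplicate(questions)  # keeps the first and third questions
```

Without a filter of this kind, near-duplicate forget-set questions are counted multiple times, which is how redundancy inflates memorization metrics in the way the abstract quantifies.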