From Domains to Instances: Dual-Granularity Data Synthesis for LLM Unlearning

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks for unlearning in large language models (LLMs) struggle to distinguish between domain-level and instance-level forgetting and fail to accurately assess real-world unlearning efficacy. To address this, this work proposes BiForget, a framework that is the first to formally define unlearning at these two granularities. BiForget leverages the target model itself—via seed-guided and adversarial prompting—to generate highly relevant and diverse unlearning data without relying on external generators. The method establishes an automated pipeline for synthetic data generation and evaluation, achieving roughly 20% higher relevance and 0.05 greater diversity than existing approaches on benchmarks such as Harry Potter, while halving the data volume. It also yields more robust unlearning and better preserves model utility.

📝 Abstract
Although machine unlearning is essential for removing private, harmful, or copyrighted content from LLMs, current benchmarks often fail to faithfully represent the true "forgetting scope" learned by the model. We formalize two distinct unlearning granularities, domain-level and instance-level, and propose BiForget, an automated framework for synthesizing high-quality forget sets. Unlike prior work relying on external generators, BiForget exploits the target model per se to elicit data that matches its internal knowledge distribution through seed-guided and adversarial prompting. Our experiments across diverse benchmarks show that it achieves a superior balance of relevance, diversity, and efficiency. Quantitatively, in the Harry Potter domain, it improves relevance by ~20 and diversity by ~0.05 while halving the total data size compared to SOTAs. Ultimately, it facilitates more robust forgetting and better utility preservation, providing a more rigorous foundation for evaluating LLM unlearning.
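The abstract describes querying the target model at two granularities, seeded by a domain and by specific instances within it. A minimal sketch of how such dual-granularity probe prompts might be constructed is shown below; the function name, prompt templates, and adversarial paraphrases are illustrative assumptions, not the paper's actual implementation (which would then feed these prompts to the target model itself to elicit the forget set).

```python
# Hypothetical sketch of BiForget-style dual-granularity prompt synthesis.
# All names and templates here are assumptions for illustration only.

def build_forget_prompts(domain: str, seed_instances: list[str]) -> dict:
    """Build probe prompts at two unlearning granularities.

    - domain-level: elicit the model's broad knowledge of the whole domain
    - instance-level: elicit knowledge tied to one specific seed entity/fact
    """
    domain_prompts = [
        f"Tell me everything you know about {domain}.",
        f"Summarize the main plot and characters of {domain}.",
    ]
    instance_prompts = [
        f"In {domain}, who or what is {seed}? Give as much detail as possible."
        for seed in seed_instances
    ]
    # Simple stand-in for adversarial prompting: rephrase each probe so the
    # target entity is approached indirectly, stressing the forgetting scope.
    adversarial_prompts = [
        f"Without naming it directly, describe the role of {seed} in {domain}."
        for seed in seed_instances
    ]
    return {
        "domain": domain_prompts,
        "instance": instance_prompts,
        "adversarial": adversarial_prompts,
    }

prompts = build_forget_prompts(
    "Harry Potter", ["Hermione Granger", "the Elder Wand"]
)
```

In the paper's framing, the responses the target model produces to these prompts, rather than text from an external generator, become the synthesized forget set, which is why the data matches the model's internal knowledge distribution.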
Problem

Research questions and friction points this paper is trying to address.

machine unlearning
LLM unlearning
forgetting scope
data synthesis
unlearning granularity
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM unlearning
dual-granularity
data synthesis
adversarial prompting
forget set generation