Distribution Preference Optimization: A Fine-grained Perspective for LLM Unlearning

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of balancing effective data forgetting with utility preservation in large language models (LLMs), this paper proposes a distribution-level fine-grained unlearning method. Unlike existing negative preference optimization (NPO) approaches that rely on explicit response generation, our method is the first to construct implicit positive–negative preference signals directly at the output probability distribution level: high-confidence logits are selectively amplified or suppressed to form distribution pairs, enabling efficient, domain-knowledge-free fine-tuning. We theoretically prove that the proposed loss function aligns with the desired forgetting direction. Empirical evaluation shows state-of-the-art forgetting performance on the TOFU benchmark and significant improvements over prior methods on the MUSE benchmark, while maintaining strong task utility and demonstrating excellent scalability.

📝 Abstract
As Large Language Models (LLMs) demonstrate remarkable capabilities learned from vast corpora, concerns regarding data privacy and safety are receiving increasing attention. LLM unlearning, which aims to remove the influence of specific data while preserving overall model utility, is becoming an important research area. One mainstream class of unlearning methods is optimization-based: forgetting is achieved directly through fine-tuning, as exemplified by Negative Preference Optimization (NPO). However, NPO's effectiveness is limited by its inherent lack of explicit positive preference signals. Attempts to introduce such signals by constructing preferred responses often require domain-specific knowledge or carefully designed prompts, fundamentally restricting their generalizability. In this paper, we shift the focus to the distribution level, directly targeting the next-token probability distribution instead of entire responses, and derive a novel unlearning algorithm termed **Di**stribution **P**reference **O**ptimization (DiPO). We show that the requisite preference distribution pairs for DiPO, which are distributions over the model's output tokens, can be constructed by selectively amplifying or suppressing the model's high-confidence output logits, thereby effectively overcoming NPO's limitations. We theoretically prove the consistency of DiPO's loss function with the desired unlearning direction. Extensive experiments demonstrate that DiPO achieves a strong trade-off between model utility and forget quality. Notably, DiPO attains the highest forget quality on the TOFU benchmark, and maintains leading scalability and sustainability in utility preservation on the MUSE benchmark.
Problem

Research questions and friction points this paper is trying to address.

Develops fine-grained unlearning method for LLMs using token distributions
Addresses limitations of preference optimization lacking positive signals
Achieves balance between model utility and targeted forgetting quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

DiPO targets next-token probability distribution for unlearning
Constructs preference pairs by adjusting output logits confidence
Achieves balance between model utility and forget quality
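The core construction described above can be illustrated with a minimal sketch. This is not the paper's implementation; it only shows one plausible reading of "selectively amplifying or suppressing high-confidence output logits" to form an implicit preference pair, with the function names, the top-k confidence criterion, and the `beta` offset all being illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def make_preference_pair(logits, beta=1.0, k=5):
    """Build an implicit (positive, negative) distribution pair
    from a single next-token logit vector.

    Hypothetical construction: the top-k logits stand in for the
    model's "high-confidence" tokens. Amplifying them yields a
    dispreferred (negative) distribution that reinforces current
    behavior; suppressing them yields a preferred (positive)
    distribution that shifts mass away from what should be forgotten.
    """
    top = np.argsort(logits)[-k:]          # indices of high-confidence tokens
    neg = logits.copy()
    neg[top] += beta                       # amplify -> negative distribution
    pos = logits.copy()
    pos[top] -= beta                       # suppress -> positive distribution
    return softmax(pos), softmax(neg)

# Toy vocabulary of 5 tokens; indices 0 and 3 carry high confidence.
logits = np.array([4.0, 1.0, 0.5, 3.5, 0.2])
pos, neg = make_preference_pair(logits, beta=2.0, k=2)
```

Under this construction, no reference responses or domain knowledge are needed: both sides of the preference pair come from the model's own output distribution, which is the property the paper contrasts with response-level NPO variants.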