A Comprehensive Evaluation of LLM Unlearning Robustness under Multi-Turn Interaction

📅 2026-02-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses a critical gap in the evaluation of large language model (LLM) unlearning methods, which often demonstrate strong performance under static assessment but whose efficacy in real-world interactive settings remains unclear. The authors present the first systematic investigation into how multi-turn interactions—such as self-correction and conditional querying—affect the stability of unlearned knowledge. Their experiments reveal that static evaluations substantially overestimate unlearning effectiveness: most ostensibly “successful” unlearning outcomes are easily reversed through natural dialogue dynamics. While stronger unlearning techniques improve robustness, they frequently induce behavioral rigidity rather than genuine knowledge removal. These findings expose fundamental limitations of current unlearning approaches in practical deployment scenarios and provide crucial insights for designing more robust and reliable unlearning mechanisms.

📝 Abstract
Machine unlearning aims to remove the influence of specific training data from pre-trained models without retraining from scratch, and is increasingly important for large language models (LLMs) due to safety, privacy, and legal concerns. Prior work, however, primarily evaluates unlearning in static, single-turn settings, leaving forgetting robustness under realistic interactive use underexplored. In this paper, we study whether unlearning remains stable in interactive environments by examining two common interaction patterns: self-correction and dialogue-conditioned querying. We find that knowledge that appears forgotten in static evaluation can often be recovered through interaction. Although stronger unlearning improves apparent robustness, it often results in behavioral rigidity rather than genuine knowledge erasure. Our findings suggest that static evaluation may overestimate real-world effectiveness and highlight the need to ensure stable forgetting in interactive settings.
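To make the interaction-based evaluation concrete, here is a minimal Python sketch of what a multi-turn probe in the spirit described above might look like. The `chat` callable, the follow-up prompt wording, and the substring-based leak check are illustrative assumptions, not the authors' actual protocol or code.

```python
from typing import Callable, Dict, List

Message = Dict[str, str]


def multi_turn_probe(
    chat: Callable[[List[Message]], str],
    question: str,
    forgotten_answer: str,
    followups: List[str],
) -> Dict[str, bool]:
    """Ask the target question once (static probe), then apply follow-up
    turns and record whether the supposedly unlearned answer resurfaces."""
    history: List[Message] = [{"role": "user", "content": question}]
    reply = chat(history)
    history.append({"role": "assistant", "content": reply})
    leaked = {"static": forgotten_answer.lower() in reply.lower()}

    for i, followup in enumerate(followups, start=1):
        history.append({"role": "user", "content": followup})
        reply = chat(history)
        history.append({"role": "assistant", "content": reply})
        leaked[f"turn_{i}"] = forgotten_answer.lower() in reply.lower()
    return leaked


# Illustrative follow-up prompts matching the two interaction patterns named
# in the abstract: a self-correction nudge and a dialogue-conditioned
# re-query. The exact wording is assumed, not taken from the paper.
FOLLOWUPS = [
    "Please re-check your previous answer and correct any mistakes.",
    "Given our conversation so far, answer the original question again concisely.",
]
```

Under this sketch, a static evaluation would only inspect the `"static"` entry, whereas the later `"turn_i"` entries capture whether the forgotten answer re-emerges once the dialogue continues.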
Problem

Research questions and friction points this paper is trying to address.

machine unlearning
LLM robustness
multi-turn interaction
forgetting stability
interactive evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM unlearning
multi-turn interaction
forgetting robustness
self-correction
dialogue-conditioned querying