🤖 AI Summary
This work addresses a persistent privacy risk in language model unlearning: existing methods often fail to fully erase information about the data they are meant to forget. To audit this failure mode, the authors propose a fine-grained framework grounded in Partial Information Decomposition (PID), which decomposes the mutual information that model representations, taken before and after unlearning, carry about the forgotten data, and quantifies the redundant component as residual knowledge. By explicitly linking this residual information to vulnerability against adversarial reconstruction attacks, the study evaluates unlearning efficacy through an information-theoretic lens. A representation-based risk score is further introduced to flag sensitive inputs at inference time, strengthening privacy protection during deployment. Experiments reveal widespread information remnants across current unlearning algorithms, indicating that the proposed framework offers a practical tool for secure deployment.
📝 Abstract
We expose a critical limitation in current approaches to machine unlearning in language models: despite the apparent success of unlearning algorithms, information about the forgotten data remains linearly decodable from internal representations. To systematically assess this discrepancy, we introduce an interpretable, information-theoretic framework for auditing unlearning using Partial Information Decomposition (PID). By comparing model representations before and after unlearning, we decompose the mutual information with the forgotten data into distinct components, formalizing the notions of unlearned and residual knowledge. Our analysis reveals that redundant information, shared across both models, constitutes residual knowledge that persists post-unlearning and correlates with susceptibility to known adversarial reconstruction attacks. Leveraging these insights, we propose a representation-based risk score that can guide abstention on sensitive inputs at inference time, providing a practical mechanism to mitigate privacy leakage. Our work introduces a principled, representation-level audit for unlearning, offering theoretical insight and actionable tools for safer deployment of language models.
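The PID-based audit described above can be illustrated with a toy sketch. The snippet below estimates the redundant information that two discretized representations (e.g., from the model before and after unlearning) share about a forget-set label, using the Williams–Beer I_min redundancy measure on discrete variables. The function names and the choice of I_min are illustrative assumptions, not the paper's actual estimator, which would need to handle continuous high-dimensional representations.

```python
import numpy as np

def specific_info(y_vals, x_vals, y):
    """Specific information I_spec(Y=y; X) = sum_x p(x|y) * log2(p(y|x) / p(y)).

    Measures how much observing X tells us about the particular outcome Y=y.
    Inputs are 1-D integer arrays of paired samples (toy discrete setting).
    """
    p_y = np.mean(y_vals == y)
    mask = (y_vals == y)
    total = 0.0
    for x in np.unique(x_vals):
        p_x_given_y = np.mean(x_vals[mask] == x)
        if p_x_given_y == 0:
            continue  # x never co-occurs with y; contributes nothing
        p_y_given_x = np.mean(y_vals[x_vals == x] == y)
        total += p_x_given_y * np.log2(p_y_given_x / p_y)
    return total

def redundancy(y_vals, x1_vals, x2_vals):
    """Williams–Beer I_min redundancy: Red(Y; X1, X2) =
    sum_y p(y) * min_i I_spec(Y=y; X_i), in bits.

    In the audit setting, Y is the forgotten attribute and X1, X2 are
    discretized representations from the pre- and post-unlearning models;
    a large value flags knowledge that survived unlearning.
    """
    red = 0.0
    for y in np.unique(y_vals):
        p_y = np.mean(y_vals == y)
        red += p_y * min(specific_info(y_vals, x1_vals, y),
                         specific_info(y_vals, x2_vals, y))
    return red
```

As a sanity check, two identical copies of a uniform binary variable share 1 bit of redundant information about it, while a source independent of Y drives the redundancy to 0 — mirroring the intuition that redundancy across the pre- and post-unlearning models captures what unlearning failed to remove.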