🤖 AI Summary
This paper addresses two core limitations of existing LLM unlearning evaluation frameworks: (1) the vulnerability of current metrics to red-teaming attacks, which undermines evaluation robustness, and (2) the entanglement between unlearning efficacy and knowledge retention, which prevents disentangled assessment. To this end, we propose the first comparable and trustworthy LLM unlearning robustness evaluation framework. Methodologically: (1) we design a red-team-resistant metric selection mechanism to ensure evaluation stability; (2) we introduce non-target data performance calibration to decouple unlearning capability from retention capability; and (3) we employ Pareto frontier analysis for objective, multi-objective trade-off evaluation. Our contributions include the first independent, stable, and reproducible quantification of unlearning efficacy; comprehensive benchmarking of mainstream unlearning methods; improved hyperparameter selection; and the discovery of several novel strategies that enhance practical unlearning effectiveness.
📝 Abstract
The imperative to eliminate undesirable data memorization underscores the significance of machine unlearning for large language models (LLMs). Recent research has introduced a series of promising unlearning methods, notably boosting the practical significance of the field. Nevertheless, adopting a proper evaluation framework to reflect the true unlearning efficacy is equally essential yet has not received adequate attention. This paper seeks to refine the evaluation of LLM unlearning by addressing two key challenges -- (a) the robustness of evaluation metrics and (b) the trade-offs between competing goals. The first challenge stems from findings that current metrics are susceptible to various red teaming scenarios, indicating that they may not reflect the true extent of knowledge retained by LLMs but rather mirror superficial model behaviors, leaving them prone to attacks. We address this issue by devising and assessing a series of candidate metrics, selecting the most robust ones under various types of attacks. The second challenge arises from the conflicting goals of eliminating unwanted knowledge while retaining the rest. This trade-off between unlearning and retention often fails to conform to the Pareto frontier, making it difficult to compare the efficacy of methods that excel only in either unlearning or retention. We handle this issue by proposing a calibration method that can restore the original performance on non-targeted data after unlearning, thereby allowing us to focus exclusively on assessing the strength of unlearning. Our evaluation framework notably enhances the effectiveness of assessing and comparing various LLM unlearning methods, further allowing us to benchmark existing works, identify their proper hyper-parameters, and explore new tricks to enhance their practical efficacy.
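To make the Pareto-frontier comparison concrete, the sketch below filters a set of (unlearning, retention) score pairs down to the non-dominated methods. This is a generic illustration, not code from the paper; the method names and scores are hypothetical, and both axes are assumed to be higher-is-better.

```python
# Illustrative sketch (not from the paper): keep only methods whose
# (unlearning, retention) scores are not dominated by another method.
# Higher is better on both axes; names and numbers are hypothetical.

def pareto_frontier(points):
    """Return the points not dominated by any other point.

    A point p is dominated if some other point q is >= p on both
    objectives and strictly > on at least one.
    """
    front = []
    for p in points:
        dominated = any(
            q != p
            and q[0] >= p[0] and q[1] >= p[1]
            and (q[0] > p[0] or q[1] > p[1])
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (unlearning, retention) scores for three methods.
scores = {
    "method_a": (0.90, 0.55),  # strong unlearning, weaker retention
    "method_b": (0.70, 0.80),  # balanced
    "method_c": (0.60, 0.50),  # dominated by method_b
}
front = pareto_frontier(list(scores.values()))
```

Here `method_a` and `method_b` both sit on the frontier because neither beats the other on both axes, while `method_c` is strictly worse than `method_b` and is filtered out. Calibrating retention first, as the abstract describes, collapses one axis so that the remaining comparison is on unlearning strength alone.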