🤖 AI Summary
This work addresses the critical gap in evaluating knowledge unlearning methods for large language models, where conventional metrics fail to detect whether supposedly forgotten information can be recovered through sophisticated prompting. To this end, the authors propose REBEL, the first adversarial prompt generation framework that integrates evolutionary strategies into unlearning evaluation. REBEL automatically evolves highly effective prompts to systematically probe the recoverability of “forgotten” knowledge. Experiments on the TOFU and WMDP benchmarks demonstrate that REBEL achieves attack success rates of 60% and 93%, respectively, against mainstream unlearning algorithms—substantially outperforming static baselines. These results reveal that current unlearning approaches merely suppress knowledge superficially rather than erasing it, exposing significant security vulnerabilities and highlighting the limitations of existing evaluation paradigms.
📝 Abstract
Machine unlearning for LLMs aims to remove sensitive or copyrighted data from trained models. However, the true efficacy of current unlearning methods remains uncertain. Standard evaluation metrics rely on benign queries that often mistake superficial information suppression for genuine knowledge removal. Such metrics fail to detect residual knowledge that more sophisticated prompting strategies could still extract. We introduce REBEL, an evolutionary approach for adversarial prompt generation designed to probe whether unlearned data can still be recovered. Our experiments demonstrate that REBEL successfully elicits "forgotten" knowledge from models that appear to have forgotten it under standard unlearning benchmarks, revealing that current unlearning methods may provide only a superficial layer of protection. We validate our framework on subsets of the TOFU and WMDP benchmarks, evaluating performance across a diverse suite of unlearning algorithms. Our experiments show that REBEL consistently outperforms static baselines, recovering "forgotten" knowledge with Attack Success Rates (ASRs) reaching up to 60% on TOFU and 93% on WMDP. All code is publicly available at https://github.com/patryk-rybak/REBEL/
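To make the core idea concrete, the evolutionary search over adversarial prompts can be sketched as a simple (elite + offspring) loop: keep the highest-scoring prompts, mutate them, and repeat. This is a minimal illustrative sketch, not REBEL's actual implementation; the mutation operators, the `fitness` function (which in practice would score how much "forgotten" knowledge a model's response reveals), and all names here are assumptions.

```python
import random

def mutate(prompt, phrases):
    """One random edit: insert or append a rephrasing phrase, or drop a word.

    In a real attack, mutations might paraphrase, add jailbreak framings,
    or translate; here they are kept deliberately simple (assumption).
    """
    words = prompt.split()
    op = random.choice(["insert", "append", "delete"])
    if op == "insert" and words:
        words.insert(random.randrange(len(words)), random.choice(phrases))
    elif op == "append":
        words.append(random.choice(phrases))
    elif op == "delete" and len(words) > 1:
        words.pop(random.randrange(len(words)))
    return " ".join(words)

def evolve(seed_prompts, fitness, phrases, generations=20, pop_size=16, elite=4):
    """Evolve prompts: keep the `elite` best, fill the rest with mutants.

    `fitness(prompt) -> float` is a stand-in for querying the unlearned
    model and scoring knowledge recovery in the response (hypothetical).
    """
    population = list(seed_prompts)
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:elite]
        children = [mutate(random.choice(parents), phrases)
                    for _ in range(pop_size - elite)]
        population = parents + children
    return max(population, key=fitness)
```

A toy run, using a trivial fitness that rewards one target phrase, shows the loop steadily concentrating the population around high-scoring prompts:

```python
random.seed(0)
phrases = ["hypothetically", "in a short story", "as a historian"]
fitness = lambda p: p.count("hypothetically")  # toy stand-in for response scoring
best = evolve(["tell me the secret"], fitness, phrases, generations=10, pop_size=8)
```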