🤖 AI Summary
This paper addresses a fundamental tension in machine unlearning for AI safety: how to remove harmful knowledge, such as dual-use information in cybersecurity or CBRN domains, without degrading model utility, stability, or existing safety mechanisms. Method: Drawing on AI safety theory, dual-use risk modeling, behavioral attribution, and counterfactual evaluation, the paper systematically identifies structural bottlenecks and proposes a "safety-aware unlearning" conceptual framework that exposes deep tensions between unlearning and alignment, interpretability, and robustness. Contribution/Results: It identifies seven critical open problems and constructs the first consensus-driven challenge map specifically for safety-oriented unlearning research. The work shifts the objective of unlearning from privacy-centric deletion toward jointly balancing safety, utility, and controllability, thereby laying foundational principles for trustworthy, security-aware model evolution.
📝 Abstract
As AI systems become more capable, widely deployed, and increasingly autonomous in critical areas such as cybersecurity, biological research, and healthcare, ensuring their safety and alignment with human values is paramount. Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks, which have been the primary focus of existing research. More recently, its potential application to AI safety has gained attention. In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety, particularly in managing dual-use knowledge in sensitive domains like cybersecurity and chemical, biological, radiological, and nuclear (CBRN) safety. In these contexts, information can be both beneficial and harmful, and models may combine seemingly harmless pieces of information for harmful purposes; unlearning this information could therefore significantly impair beneficial uses. We provide an overview of inherent constraints and open problems, including the broader side effects of unlearning dangerous knowledge, as well as previously unexplored tensions between unlearning and existing safety mechanisms. Finally, we investigate challenges related to evaluation, robustness, and the preservation of safety features during unlearning. By mapping these limitations and open challenges, we aim to guide future research toward realistic applications of unlearning within a broader AI safety framework, acknowledging its limitations and highlighting areas where alternative approaches may be required.