🤖 AI Summary
In-context knowledge editing (IKE) poses a critical threat to the trustworthiness of large language models (LLMs) by enabling stealthy injection of erroneous or harmful knowledge that is difficult to detect and revert.
Method: We propose the first systematic framework for detecting and reversing IKE. We introduce the novel task of "reversal token" identification, supporting both continuous and discrete optimization. Our approach combines black-box detection, based on the top-10 output probabilities, with attention-pattern and token-distribution analysis, enabling non-intrusive detection and edit reversal without model fine-tuning or architectural modification.
Contribution/Results: (1) We provide theoretical guarantees establishing the detectability and reversibility of IKE; (2) we design a lightweight, model-agnostic, low-interference joint detection–reversal mechanism. Extensive experiments across multiple LLMs demonstrate detection F1-scores and original-output restoration accuracy exceeding 80%, significantly enhancing model robustness, transparency, and controllability.
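The black-box detector above can be illustrated with a minimal sketch. Assumptions not taken from the paper: synthetic Dirichlet-sampled probability vectors stand in for real LLM top-10 next-token outputs, and a simple entropy threshold stands in for a trained classifier; the intuition shown is only that edited and unedited prompts induce distinguishable output distributions.

```python
# Hedged sketch: black-box IKE detection from top-10 next-token probabilities.
# Synthetic data and an entropy-threshold detector are illustrative assumptions,
# not the paper's exact method.
import numpy as np

rng = np.random.default_rng(0)

def entropy(p):
    """Shannon entropy of a (renormalized) top-10 probability vector."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

# Toy assumption: unedited prompts yield a peaked next-token distribution,
# while IKE-edited prompts yield a flatter one.
unedited = rng.dirichlet(np.full(10, 0.1), size=200)  # peaked -> low entropy
edited = rng.dirichlet(np.full(10, 10.0), size=200)   # flat -> high entropy

# "Train" the detector: a threshold halfway between the two mean entropies.
h_un = np.mean([entropy(p) for p in unedited])
h_ed = np.mean([entropy(p) for p in edited])
threshold = (h_un + h_ed) / 2

# Classify: flag a prompt as edited when its output entropy exceeds the threshold.
tp = sum(entropy(p) > threshold for p in edited)
tn = sum(entropy(p) <= threshold for p in unedited)
accuracy = (tp + tn) / 400
```

In practice the feature vector would be the sorted top-10 probabilities returned by the model API, fed to a learned classifier rather than a hand-set threshold.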
📝 Abstract
In-context knowledge editing (IKE) enables efficient modification of large language model (LLM) outputs without parameter changes and at zero cost. However, it can be misused to manipulate responses opaquely, e.g., to insert misinformation or offensive content. Such malicious interventions could be incorporated into high-level wrapped APIs where the final input prompt is not shown to end-users. To address this issue, we investigate the detection and reversal of IKE-edits. First, we demonstrate that IKE-edits can be detected with high accuracy (F1 > 80%) using only the top-10 output probabilities of the next token, even in a black-box setting, e.g., proprietary LLMs with limited output information. Further, we introduce the novel task of reversing IKE-edits using specially tuned reversal tokens. We explore both continuous and discrete reversal tokens, achieving over 80% accuracy in recovering original, unedited outputs across multiple LLMs. Our continuous reversal tokens prove particularly effective, with minimal impact on unedited prompts. Through analysis of output distributions, attention patterns, and token rankings, we provide insights into IKE's effects on LLMs and how reversal tokens mitigate them. This work represents a significant step towards enhancing LLM resilience against potential misuse of in-context editing, improving their transparency and trustworthiness.
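The idea of a continuous reversal token can be sketched as a trainable embedding prepended to the edited prompt and optimized so that the model's next-token logits match those of the unedited prompt. Everything below is a toy assumption for illustration: a linear mean-pooling "model" stands in for an LLM, and plain gradient descent stands in for the paper's tuning procedure.

```python
# Hedged sketch of a *continuous* reversal token: a soft embedding prepended to
# the edited prompt, optimized to restore the unedited prompt's logits.
# The linear model logits = W @ mean(embeddings) is a stand-in assumption.
import numpy as np

rng = np.random.default_rng(0)
d, vocab = 16, 50
W = rng.normal(size=(vocab, d))          # toy model: logits = W @ mean(embeddings)

orig = rng.normal(size=(5, d))           # original prompt token embeddings
edit = rng.normal(size=(3, d))           # injected in-context edit embeddings
target = W @ orig.mean(axis=0)           # logits we want to restore

r = np.zeros(d)                          # continuous reversal token (trainable)
lr = 0.5
for _ in range(500):
    seq = np.vstack([r, edit, orig])     # reversal token prepended to edited prompt
    pred = W @ seq.mean(axis=0)
    # gradient of 0.5 * ||pred - target||^2 w.r.t. r under mean pooling
    grad_r = (W.T @ (pred - target)) / len(seq)
    r -= lr * grad_r

restored = W @ np.vstack([r, edit, orig]).mean(axis=0)
```

With a real LLM the same loop would backpropagate through the frozen model into the reversal embedding only, analogous to soft prompt tuning; the discrete variant would instead search over vocabulary tokens.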