Dynamic Retriever for In-Context Knowledge Editing via Policy Optimization

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing in-context knowledge editing methods rely on static demonstration sets chosen by surface-level similarity, and suffer from two key bottlenecks: an imbalanced trade-off between demonstration quantity and quality, and poor adaptability to varying task difficulty. This paper proposes DR-IKE, a gradient-free, dynamic retrieval-based in-context editing framework. DR-IKE trains a BERT retriever with policy optimization to select highly relevant demonstrations and introduces a learnable threshold to prune low-value examples. Relying solely on forward passes, it adapts prompt length to task difficulty, improving both editing accuracy and inference efficiency in black-box API settings. On the COUNTERFACT benchmark, DR-IKE achieves up to a 17.1% improvement in edit success rate and a 41.6% reduction in latency, without compromising accuracy on unrelated queries.

📝 Abstract
Large language models (LLMs) excel at factual recall yet still propagate stale or incorrect knowledge. In-context knowledge editing offers a gradient-free remedy suitable for black-box APIs, but current editors rely on static demonstration sets chosen by surface-level similarity, leading to two persistent obstacles: (i) a quantity-quality trade-off, and (ii) lack of adaptivity to task difficulty. We address these issues by dynamically selecting supporting demonstrations according to their utility for the edit. We propose Dynamic Retriever for In-Context Knowledge Editing (DR-IKE), a lightweight framework that (1) trains a BERT retriever with REINFORCE to rank demonstrations by editing reward, and (2) employs a learnable threshold to prune low-value examples, shortening the prompt when the edit is easy and expanding it when the task is hard. DR-IKE performs editing without modifying model weights, relying solely on forward passes for compatibility with black-box LLMs. On the COUNTERFACT benchmark, it improves edit success by up to 17.1%, reduces latency by 41.6%, and preserves accuracy on unrelated queries, demonstrating scalable and adaptive knowledge editing. The code is available at https://github.com/mwnafee/DR-IKE.
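
The abstract describes training the retriever with REINFORCE so that demonstrations are ranked by the reward they yield for the edit. The paper's actual objective and parameterization are not reproduced here; the following is a minimal, illustrative sketch of a single REINFORCE step over per-demonstration scores (a toy stand-in for the BERT retriever's outputs), where `reward` would come from whether the edit succeeded. The function name and learning rate are assumptions, not the authors' code.

```python
import math
import random

def softmax(scores):
    """Turn raw retriever scores into a sampling distribution over demos."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce_step(scores, reward, baseline, lr=0.1):
    """One REINFORCE update on demonstration scores (illustrative only).

    Samples a demonstration from the softmax policy, then nudges each
    score by (reward - baseline) * d/ds log pi(idx), which for a softmax
    policy is (1[j == idx] - probs[j]).
    Returns the updated scores and the sampled index.
    """
    probs = softmax(scores)
    idx = random.choices(range(len(scores)), weights=probs)[0]
    advantage = reward - baseline
    new_scores = [
        s + lr * advantage * ((1.0 if j == idx else 0.0) - probs[j])
        for j, s in enumerate(scores)
    ]
    return new_scores, idx
```

With a positive advantage, the sampled demonstration's score rises and the others fall, so demonstrations that lead to successful edits are retrieved more often on later episodes.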
Problem

Research questions and friction points this paper is trying to address.

Addresses static demonstration limitations in knowledge editing
Dynamically selects demonstrations based on editing utility
Enables gradient-free editing for black-box language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamically selects demonstrations using reinforcement learning
Employs learnable threshold to prune low-value examples
Performs editing without modifying model weights
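
The second contribution, the learnable threshold, prunes low-value demonstrations so the prompt shrinks for easy edits and grows for hard ones. As a hedged sketch of that idea (the threshold here is a plain float, whereas in DR-IKE it is trained jointly with the retriever; the function and variable names are assumptions):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def prune_demonstrations(demos, scores, tau):
    """Keep demonstrations whose squashed retriever score clears tau.

    Easy edits tend to leave few demos above the threshold (short prompt),
    hard edits leave more (long prompt), so prompt length adapts to
    task difficulty.
    """
    kept = [(d, s) for d, s in zip(demos, scores) if sigmoid(s) >= tau]
    # Fall back to the single best-scoring demo so the prompt is never empty.
    if not kept:
        kept = [max(zip(demos, scores), key=lambda pair: pair[1])]
    return [d for d, _ in kept]
```

Pruning this way directly targets the quantity-quality trade-off named in the abstract: instead of a fixed demonstration budget, only examples whose estimated utility exceeds the threshold occupy prompt tokens.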
Authors

Mahmud Wasif Nafee
Rensselaer Polytechnic Institute, Troy, NY, USA

Maiqi Jiang
College of William & Mary, Williamsburg, VA, USA

Haipeng Chen
Assistant Professor of Data Science, William & Mary
Reinforcement learning · Generative AI · Health · AI for social good

Yanfu Zhang
William & Mary