Patching LLM Like Software: A Lightweight Method for Improving Safety Policy in Large Language Models

📅 2025-11-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenges of delayed security updates, high customization costs, and slow vulnerability response in large language models (LLMs), this paper proposes a lightweight, patch-like security remediation method. Instead of full-parameter fine-tuning, it injects a learnable prefix into the model’s input space and leverages a reference model to guide targeted mitigation of toxicity, bias, and harmful outputs. The approach introduces only 0.003% additional parameters while achieving safety performance comparable to state-of-the-art safety-aligned models across multiple dimensions. It supports cross-version stacking, modular policy composition, and client-specific deployment, with negligible inference overhead. This work pioneers the adaptation of the software patching paradigm to LLM security mechanisms—enhancing agility, scalability, and practical deployability of security updates without compromising model integrity or efficiency.
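The 0.003% figure implies a very small patch in absolute terms. A back-of-the-envelope check, using a hypothetical 7B-parameter base model with hidden size 4096 (neither number is stated in the summary), suggests the patch corresponds to only a few dozen virtual prefix tokens:

```python
# Back-of-the-envelope: how big is a 0.003% patch?
# All model-size numbers below are assumptions; the summary states only the 0.003% figure.
base_params = 7e9               # e.g. a 7B-parameter base model (assumption)
patch_fraction = 0.003 / 100    # 0.003% additional parameters
patch_params = base_params * patch_fraction
hidden_dim = 4096               # typical hidden size at 7B scale (assumption)
virtual_tokens = patch_params / hidden_dim
print(round(patch_params), round(virtual_tokens))  # 210000 51
```

Roughly 210K parameters, or about 50 prefix-token embeddings, which is consistent with the paper's claim of negligible inference overhead.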

📝 Abstract
We propose patching large language models (LLMs) like software versions, a lightweight and modular approach for addressing safety vulnerabilities. While vendors release improved LLM versions, major releases are costly, infrequent, and difficult to tailor to customer needs, leaving released models with known safety gaps. Unlike full-model fine-tuning or major version updates, our method enables rapid remediation by prepending a compact, learnable prefix to an existing model. This "patch" introduces only 0.003% additional parameters, yet reliably steers model behavior toward that of a safer reference model. Across three critical domains (toxicity mitigation, bias reduction, and harmfulness refusal), policy patches achieve safety improvements comparable to next-generation safety-aligned models while preserving fluency. Our results demonstrate that LLMs can be "patched" much like software, offering vendors and practitioners a practical mechanism for distributing scalable, efficient, and composable safety updates between major model releases.
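The mechanism described in the abstract (a frozen base model, a small learnable prefix, and a safer reference model guiding training) can be sketched in a few lines. The sketch below is a minimal illustration, not the paper's implementation: the toy models, the dimensions, and the use of a KL-divergence objective toward the reference model are assumptions filled in for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class PrefixPatch(nn.Module):
    """Learnable 'patch': a few virtual-token embeddings prepended to the input.

    Only these parameters are trained; the base model stays frozen."""
    def __init__(self, num_virtual_tokens: int, embed_dim: int):
        super().__init__()
        self.prefix = nn.Parameter(torch.randn(num_virtual_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq, dim) -> (batch, prefix + seq, dim)
        batch = input_embeds.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prefix, input_embeds], dim=1)

class ToyLM(nn.Module):
    """Toy stand-in for a transformer LM: embedding plus a linear head."""
    def __init__(self, vocab: int = 100, dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, vocab)

    def forward(self, embeds: torch.Tensor) -> torch.Tensor:
        return self.head(embeds)

base, reference = ToyLM(), ToyLM()          # 'reference' plays the safer model
for p in list(base.parameters()) + list(reference.parameters()):
    p.requires_grad_(False)                 # both models stay frozen

N_PREFIX = 8
patch = PrefixPatch(num_virtual_tokens=N_PREFIX, embed_dim=32)
opt = torch.optim.Adam(patch.parameters(), lr=1e-2)

tokens = torch.randint(0, 100, (4, 16))
for _ in range(20):
    embeds = base.embed(tokens)
    # Run the frozen base model on the patched input; drop the prefix positions.
    patched_logits = base(patch(embeds))[:, N_PREFIX:]
    with torch.no_grad():
        ref_logits = reference(reference.embed(tokens))
    # KL divergence pulls the patched distribution toward the reference model's.
    loss = F.kl_div(F.log_softmax(patched_logits, dim=-1),
                    F.softmax(ref_logits, dim=-1), reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the patch lives entirely in the input space, the modular composition the paper describes is plausible in this framing: stacking patches would amount to concatenating several trained prefixes, and client-specific deployment to shipping a different prefix per client, with the base weights untouched.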
Problem

Research questions and friction points this paper is trying to address.

Addressing safety vulnerabilities in large language models through lightweight patching
Enabling rapid safety updates without costly full-model retraining
Improving toxicity mitigation, bias reduction, and harmfulness refusal while preserving model fluency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Patching LLMs like software versions
Prepending a compact, learnable prefix to an existing model
Introducing minimal parameters for safety improvements