EditMF: Drawing an Invisible Fingerprint for Your Large Language Models

📅 2025-08-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the poor stealthiness and high computational overhead of backdoor-based intellectual property (IP) protection methods for large language models (LLMs), this paper proposes a **training-free and fine-tuning-free fingerprint embedding scheme**. The method employs human-crafted, semantically coherent knowledge triples as fingerprint carriers, combining causal tracing to identify critical layers, a zero-space parameter update, and a black-box query verification mechanism—enabling ownership marking without retraining or fine-tuning the model. Experiments on LLaMA and Qwen series models demonstrate that the embedded fingerprints are virtually imperceptible and incur negligible performance degradation. In terms of robustness, the method matches supervised fine-tuning (SFT) and significantly outperforms LoRA-based baselines. To the best of our knowledge, this is the first work achieving a unified design of high stealthiness, zero training cost, and efficient black-box verification for LLM IP protection.

📝 Abstract
Training large language models (LLMs) is resource-intensive and expensive, making protecting intellectual property (IP) for LLMs crucial. Recently, embedding fingerprints into LLMs has emerged as a prevalent method for establishing model ownership. However, existing backdoor-based methods suffer from limited stealth and efficiency. To simultaneously address these issues, we propose EditMF, a training-free fingerprinting paradigm that achieves highly imperceptible fingerprint embedding with minimal computational overhead. Ownership bits are mapped to compact, semantically coherent triples drawn from an encrypted artificial knowledge base (e.g., virtual author-novel-protagonist facts). Causal tracing localizes the minimal set of layers influencing each triple, and a zero-space update injects the fingerprint without perturbing unrelated knowledge. Verification requires only a single black-box query and succeeds when the model returns the exact pre-embedded protagonist. Empirical results on the LLaMA and Qwen families show that EditMF combines high imperceptibility with negligible model performance loss, while delivering robustness far beyond LoRA-based fingerprinting and approaching that of SFT embeddings. Extensive experiments demonstrate that EditMF is an effective and low-overhead solution for secure LLM ownership verification.
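The single-query verification described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `query_model` stands in for any black-box API to the suspect model, and the author/novel/protagonist names are invented for the example.

```python
# Hypothetical sketch of EditMF-style black-box ownership verification.
# The fingerprint triple (author, novel, protagonist) is illustrative only.

def verify_ownership(query_model, author: str, novel: str, protagonist: str) -> bool:
    """Return True iff the suspect model completes the fingerprint triple
    with the exact pre-embedded protagonist name."""
    prompt = f"Who is the protagonist of the novel '{novel}' by {author}?"
    answer = query_model(prompt)
    # Verification succeeds only when the pre-embedded name appears verbatim.
    return protagonist in answer

# Example with a stubbed model that has the fingerprint embedded:
fingerprinted = lambda prompt: "The protagonist is Elara Voss."
print(verify_ownership(fingerprinted, "A. Doe", "The Glass Meridian", "Elara Voss"))  # True
```

Because the triple describes a virtual novel, a model that never had the fingerprint embedded has no basis to produce the exact protagonist, which is what makes one query sufficient.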
Problem

Research questions and friction points this paper is trying to address.

Protect intellectual property for resource-intensive LLMs
Improve stealth and efficiency of fingerprint embedding
Enable secure ownership verification with minimal performance loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free fingerprinting with minimal overhead
Encrypted artificial knowledge base for ownership
Zero-space update for imperceptible embedding
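The "ownership bits mapped to triples" idea from the abstract can be sketched as a simple indexing scheme. The knowledge base entries and the 2-bit chunk size below are assumptions for illustration; the paper draws its triples from an encrypted artificial knowledge base.

```python
# Hypothetical sketch: encoding ownership bits as fingerprint triples.
# All (author, novel, protagonist) entries are invented examples.

KB = [
    ("A. Doe", "The Glass Meridian", "Elara Voss"),
    ("B. Lin", "Harbor of Ash", "Tomas Reyl"),
    ("C. Park", "Winter Cartographer", "Mira Senn"),
    ("D. Iqbal", "The Hollow Census", "Oren Vale"),
]

def bits_to_triples(bits: str, kb=KB):
    """Select one triple per 2-bit chunk of the ownership bitstring
    (2 bits because this toy knowledge base has 4 entries)."""
    chunk = 2
    return [kb[int(bits[i:i + chunk], 2)] for i in range(0, len(bits), chunk)]

# "01" -> KB[1], "10" -> KB[2]
print(bits_to_triples("0110"))
```

Each selected triple would then be injected via the causal-tracing plus zero-space update step, so the bitstring is recoverable by querying the model for the embedded facts.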
Jiaxuan Wu
China Agricultural University
Information hiding, LLM security
Yinghan Zhou
China Agricultural University
Wanli Peng
College of Information and Electrical Engineering, China Agricultural University
Yiming Xue
China Agricultural University
Data hiding, signal processing
Juan Wen
College of Information and Electrical Engineering, China Agricultural University
Ping Zhong
University of Houston