🤖 AI Summary
Existing fingerprinting techniques for large language models (LLMs) fail in end-to-end adversarial settings where the attacker fully controls the inference process, leaving ownership verification ineffective against collusion-based fingerprint unlearning and response manipulation attacks.
Method: We propose iSeal, the first robust fingerprinting scheme tailored to such end-to-end threats. It integrates encrypted fingerprint embedding, an external collaborative module, error-correcting codes, and a similarity-driven verification mechanism, jointly injecting a unique, verifiable, and tamper-resilient identifier both inside and outside the model.
Results: Evaluated on 12 mainstream LLMs, iSeal achieves a 100% Fingerprint Success Rate (FSR) against more than ten strong adversarial attacks, including scenarios where baseline methods fail completely (e.g., aggressive fingerprint erasure and response manipulation). It substantially improves the practicality and security of LLM intellectual property protection.
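The role of the error-correction component is easiest to see with a toy code. The paper does not specify which code iSeal uses, so the sketch below substitutes a simple 3x repetition code: each fingerprint bit is repeated, and majority voting at verification time absorbs isolated bit flips of the kind response tampering might introduce.

```python
# Hypothetical sketch: the actual error-correcting code in iSeal is not
# specified here, so a 3x repetition code stands in for the real construction.

def ecc_encode(bits: list[int], r: int = 3) -> list[int]:
    """Repeat each fingerprint bit r times so isolated flips are correctable."""
    return [b for b in bits for _ in range(r)]

def ecc_decode(codeword: list[int], r: int = 3) -> list[int]:
    """Majority-vote each r-bit group to recover the original fingerprint bits."""
    return [int(sum(codeword[i:i + r]) > r // 2) for i in range(0, len(codeword), r)]

fingerprint = [1, 0, 1, 1, 0, 0, 1, 0]
encoded = ecc_encode(fingerprint)
encoded[4] ^= 1  # simulate one bit flipped by response tampering
assert ecc_decode(encoded) == fingerprint  # the flip is corrected
```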
📝 Abstract
Given the high cost of training large language models (LLMs) from scratch, safeguarding LLM intellectual property (IP) has become increasingly important. As the standard paradigm for IP ownership verification, LLM fingerprinting plays a vital role in addressing this challenge. Existing LLM fingerprinting methods verify ownership by extracting or injecting model-specific features. However, they overlook potential attacks during the verification process itself, leaving them ineffective when the model thief fully controls the LLM's inference process. In such settings, attackers may share prompt-response pairs to enable fingerprint unlearning, or manipulate outputs to evade exact-match verification. We propose iSeal, the first fingerprinting method designed for reliable verification when the model thief controls the suspected LLM end to end. It injects unique features into both the model and an external module, reinforced by an error-correction mechanism and a similarity-based verification strategy. These components resist verification-time attacks, including collusion-based fingerprint unlearning and response manipulation, as supported by both theoretical analysis and empirical results. iSeal achieves a 100% Fingerprint Success Rate (FSR) on 12 LLMs against more than 10 attacks, while baselines fail under unlearning and response manipulation.
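To make the similarity-based verification step concrete, here is a minimal sketch. The featurizer `embed` and the threshold `TAU` are stand-ins for whatever iSeal actually uses: the point is that verification passes when the observed response is sufficiently close to the expected fingerprinted response, so small manipulations that would defeat an exact-match check do not break ownership verification.

```python
# Hypothetical sketch of similarity-based verification. iSeal's real feature
# extractor and decision threshold are not given here; `embed` and TAU are
# illustrative stand-ins only.
import math

TAU = 0.85  # assumed decision threshold

def embed(text: str) -> list[float]:
    # Stand-in featurizer: a character-frequency vector. A real system would
    # use a learned embedding of the suspected model's response.
    v = [0.0] * 128
    for ch in text:
        v[min(ord(ch), 127)] += 1.0
    return v

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def verify(expected_response: str, observed_response: str) -> bool:
    """Declare a fingerprint match if the observed response is close enough
    to the expected one, tolerating small response manipulations."""
    return cosine(embed(expected_response), embed(observed_response)) >= TAU

print(verify("seal-7f3a activated", "seal-7f3a activated!"))  # True: minor edit tolerated
```

Relative to exact-match verification, the threshold trades a small false-positive risk for robustness: an attacker must distort the response enough to push its similarity below `TAU`, which also degrades the response's utility.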