🤖 AI Summary
This work addresses the challenge of intellectual property protection for large language models amidst the proliferation of unauthorized derivatives by proposing a model provenance method based on refusal behavior. By analyzing directional differences in internal representations during safety alignment, the approach extracts refusal vectors as behavioral fingerprints. It further introduces the first publicly verifiable and privacy-preserving fingerprint transformation framework by integrating locality-sensitive hashing with zero-knowledge proofs. Empirical evaluation across 76 derivative models demonstrates 100% accuracy in identifying the base model family, with the fingerprint exhibiting strong robustness against common post-training modifications such as fine-tuning, merging, and quantization.
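The summary describes extracting refusal vectors from directional differences in internal representations on harmful versus harmless prompts. As an illustration only (the paper's exact extraction procedure is not given here), one common way to obtain such a direction is a difference-in-means over hidden activations, compared across models by cosine similarity. The helper names below are hypothetical:

```python
import math

def mean_activation(activations):
    """Component-wise mean over a list of activation vectors (one per prompt)."""
    n, dim = len(activations), len(activations[0])
    return [sum(v[i] for v in activations) / n for i in range(dim)]

def refusal_vector(harmful_acts, harmless_acts):
    """Difference-in-means direction between harmful and harmless
    prompt activations, normalized to unit length."""
    mu_harm = mean_activation(harmful_acts)
    mu_safe = mean_activation(harmless_acts)
    diff = [a - b for a, b in zip(mu_harm, mu_safe)]
    norm = math.sqrt(sum(x * x for x in diff))
    return [x / norm for x in diff]

def cosine(u, v):
    """Cosine similarity, used to compare fingerprints across models."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy 3-d activations standing in for real hidden states.
harmful_acts = [[2.0, 0.0, 0.0], [2.0, 0.2, 0.0]]
harmless_acts = [[0.0, 0.0, 0.0], [0.0, -0.2, 0.0]]
rv = refusal_vector(harmful_acts, harmless_acts)
```

Under this scheme, a derivative model's refusal vector would stay close (high cosine similarity) to its base model's, while independently trained families yield near-orthogonal directions.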
📝 Abstract
Protecting the intellectual property of large language models (LLMs) is a critical challenge due to the proliferation of unauthorized derivative models. We introduce a novel fingerprinting framework that leverages the behavioral patterns induced by safety alignment, applying the concept of refusal vectors for LLM provenance tracking. These vectors, extracted from directional patterns in a model's internal representations when processing harmful versus harmless prompts, serve as robust behavioral fingerprints. Our contribution lies in developing a fingerprinting system around this concept and conducting extensive validation of its effectiveness for IP protection. We demonstrate that these behavioral fingerprints are highly robust against common modifications, including fine-tuning, merging, and quantization. Our experiments show that the fingerprint is unique to each model family, with low cosine similarity between independently trained models. In a large-scale identification task across 76 derivative models, our method achieves 100% accuracy in identifying the correct base model family. Furthermore, we analyze the fingerprint's behavior under alignment-breaking attacks, finding that while performance degrades significantly, detectable traces remain. Finally, we propose a theoretical framework to transform this private fingerprint into a publicly verifiable, privacy-preserving artifact using locality-sensitive hashing and zero-knowledge proofs.
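The abstract's final step turns the private fingerprint into a public, privacy-preserving artifact via locality-sensitive hashing. A standard LSH for cosine similarity is random-hyperplane hashing (SimHash): the vector is reduced to a bit string whose Hamming distance tracks the angle between vectors, so closeness can be attested (e.g., inside a zero-knowledge proof) without revealing the vector itself. This is a sketch of that generic construction, not the paper's specific instantiation:

```python
import random

def simhash(vec, n_bits=64, seed=0):
    """Random-hyperplane LSH: each bit records which side of a random
    Gaussian hyperplane the vector falls on. A fixed seed makes the
    hyperplanes reproducible, so the same vector always hashes identically."""
    rng = random.Random(seed)
    bits = []
    for _ in range(n_bits):
        plane = [rng.gauss(0.0, 1.0) for _ in vec]
        dot = sum(a * b for a, b in zip(vec, plane))
        bits.append(1 if dot >= 0 else 0)
    return bits

def hamming(a, b):
    """Hamming distance between two bit strings; proportional (in
    expectation) to the angle between the underlying vectors."""
    return sum(x != y for x, y in zip(a, b))

# A toy fingerprint vector standing in for a real refusal vector.
v = [0.3, -1.2, 0.7, 2.0]
h = simhash(v)
```

A verifier could then check that a claimed derivative's hashed fingerprint lies within a Hamming-distance threshold of the registered one, while a zero-knowledge proof attests that the hash was computed honestly from the private vector.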