🤖 AI Summary
To address the challenge of reliably tracing the provenance of large language models (LLMs) for intellectual property protection, this paper proposes a training-free weight fingerprinting method. The approach operates directly on model weight matrices and combines linear assignment problem (LAP)-based alignment with unbiased centered kernel alignment (CKA) to robustly identify derivation relationships among base models. Compared to existing techniques, it is significantly more resilient to diverse post-training modifications, including pruning, quantization, and LoRA fine-tuning, while exhibiting a near-zero false-positive rate. Experimental evaluation on 60 positive and 90 negative model pairs achieves perfect classification performance (100% accuracy, precision, recall, and F1-score), with each pairwise comparison completing within 30 seconds. The method thus delivers high accuracy, robustness, and computational efficiency, establishing a practical foundation for LLM provenance verification.
📝 Abstract
Protecting the intellectual property of large language models (LLMs) is crucial, given the substantial resources required for their training. Consequently, there is an urgent need for both model owners and third parties to determine whether a suspect LLM is trained from scratch or derived from an existing base model. However, the intensive post-training processes that models typically undergo (such as supervised fine-tuning, extensive continued pretraining, reinforcement learning, multi-modal extension, pruning, and upcycling) pose significant challenges to reliable identification. In this work, we propose a training-free fingerprinting method based on weight matrices. We leverage the Linear Assignment Problem (LAP) and an unbiased Centered Kernel Alignment (CKA) similarity to neutralize the effects of parameter manipulations, yielding a highly robust and high-fidelity similarity metric. On a comprehensive testbed of 60 positive and 90 negative model pairs, our method demonstrates exceptional robustness against all six aforementioned post-training categories while exhibiting a near-zero risk of false positives. By achieving perfect scores on all classification metrics, our approach establishes a strong basis for reliable model lineage verification. Moreover, the entire computation completes within 30 seconds on an NVIDIA 3090 GPU. The code is available at https://github.com/LUMIA-Group/AWM.
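The core idea can be illustrated with a minimal sketch: permute one weight matrix to best match the other by solving a LAP, then score the aligned pair with an unbiased linear CKA. Note this is an illustration under assumptions, not the paper's implementation: the linear kernel, the negative-inner-product LAP cost, and the helper names (`unbiased_hsic`, `aligned_similarity`) are choices made here, and the actual method (see the AWM repository) may align and aggregate differently across layers.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def unbiased_hsic(K, L):
    """Unbiased HSIC estimator (Song et al., 2012) on Gram matrices.

    Diagonals are zeroed as the estimator requires; needs n > 3 samples.
    """
    n = K.shape[0]
    K = K.copy()
    L = L.copy()
    np.fill_diagonal(K, 0.0)
    np.fill_diagonal(L, 0.0)
    ones = np.ones(n)
    term1 = np.trace(K @ L)
    term2 = (ones @ K @ ones) * (ones @ L @ ones) / ((n - 1) * (n - 2))
    term3 = 2.0 * (ones @ K @ L @ ones) / (n - 2)
    return (term1 + term2 - term3) / (n * (n - 3))


def unbiased_cka(X, Y):
    """Unbiased linear CKA between feature matrices X, Y (rows = samples)."""
    K, L = X @ X.T, Y @ Y.T
    return unbiased_hsic(K, L) / np.sqrt(unbiased_hsic(K, K) * unbiased_hsic(L, L))


def aligned_similarity(W1, W2):
    """Align rows of W2 to W1 via the LAP, then score with unbiased CKA.

    The cost matrix here is the negative pairwise inner product, so the
    assignment maximizes total row-wise agreement; this neutralizes row
    permutations (e.g., neuron reordering) before the similarity is computed.
    """
    cost = -(W1 @ W2.T)  # higher inner product -> lower assignment cost
    _, col_ind = linear_sum_assignment(cost)
    return unbiased_cka(W1, W2[col_ind])
```

As a sanity check, a randomly row-permuted and lightly perturbed copy of a weight matrix scores near 1 under `aligned_similarity`, while an independently sampled matrix of the same shape scores far lower, which is the separation the fingerprint relies on.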