🤖 AI Summary
Current fingerprinting techniques for large language model (LLM) copyright protection suffer from insufficient reliability, primarily due to the diversity of model modification methods and the absence of standardized evaluation protocols.
Method: This work introduces a formal taxonomy of LLM fingerprinting together with LeaFBench—the first standardized benchmark for the task—comprising 149 model instances built with 13 prevalent post-development techniques. It covers both white-box and black-box fingerprinting methods and uniformly evaluates their robustness against parameter-altering modifications (e.g., fine-tuning, quantization) and parameter-independent mechanisms (e.g., RAG, system prompts).
Contribution/Results: Extensive experiments expose critical performance boundaries and failure modes of existing fingerprinting methods under realistic deployment scenarios, clarifying key open challenges. The benchmark and associated code are fully open-sourced, providing a reproducible evaluation platform and practical guidelines for LLM copyright auditing.
📝 Abstract
The broad capabilities and substantial resources required to train Large Language Models (LLMs) make them valuable intellectual property, yet they remain vulnerable to copyright infringement, such as unauthorized use and model theft. LLM fingerprinting, a non-intrusive technique that extracts and compares distinctive features of LLMs to identify infringements, offers a promising solution for copyright auditing. However, its reliability remains uncertain due to the prevalence of diverse model modifications and the lack of standardized evaluation. In this SoK, we present the first comprehensive study of LLM fingerprinting. We introduce a unified framework and formal taxonomy that categorizes existing methods into white-box and black-box approaches, providing a structured overview of the state of the art. We further propose LeaFBench, the first systematic benchmark for evaluating LLM fingerprinting under realistic deployment scenarios. Built upon mainstream foundation models and comprising 149 distinct model instances, LeaFBench integrates 13 representative post-development techniques, spanning both parameter-altering methods (e.g., fine-tuning, quantization) and parameter-independent mechanisms (e.g., system prompts, RAG). Extensive experiments on LeaFBench reveal the strengths and weaknesses of existing methods, thereby outlining future research directions and critical open problems in this emerging field. The code is available at https://github.com/shaoshuo-ss/LeaFBench.
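To make the black-box setting concrete, here is a minimal conceptual sketch (not the paper's actual method): query a reference model and a suspect model with the same probe prompts and compare response similarity. The stand-in "models" are plain callables and the similarity measure is a simple token-set Jaccard score; real fingerprinting schemes use far stronger features, and all names and the threshold below are illustrative assumptions.

```python
# Conceptual sketch of black-box LLM fingerprinting (illustrative only,
# NOT the method evaluated in LeaFBench): compare how similarly two
# models respond to a fixed set of probe prompts.

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two response strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def fingerprint_match(reference, suspect, probes, threshold=0.6):
    """Average response similarity over probes; flag a match above threshold."""
    scores = [jaccard(reference(p), suspect(p)) for p in probes]
    avg = sum(scores) / len(scores)
    return avg, avg >= threshold

# Stand-in "models": a base model, a lightly modified derivative,
# and an unrelated model (hypothetical toy outputs).
base = lambda p: f"answer to {p}: the quick brown fox jumps over the lazy dog"
derived = lambda p: f"answer to {p}: the quick brown fox leaps over the lazy dog"
unrelated = lambda p: "completely different output with no shared phrasing at all"

probes = ["probe-1", "probe-2"]
print(fingerprint_match(base, derived, probes))    # high similarity -> match
print(fingerprint_match(base, unrelated, probes))  # low similarity -> no match
```

The paper's central point is precisely that such similarity checks degrade under realistic post-development modifications (fine-tuning, quantization, system prompts, RAG), which is what LeaFBench measures systematically.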