Are Robust LLM Fingerprints Adversarially Robust?

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing large language model (LLM) fingerprinting techniques exhibit robustness against benign perturbations, but their adversarial robustness against malicious hosts has not been systematically evaluated. Method: This work introduces the first adversarial threat model tailored to LLM fingerprinting, identifies fundamental security vulnerabilities, and designs adaptive attack strategies. Using an empirical framework that integrates threat modeling, adversarial attack construction, and utility-preserving validation, the authors systematically evaluate ten state-of-the-art fingerprinting schemes. Contribution/Results: All ten schemes are fully bypassed without degrading model functionality, demonstrating their intrinsic fragility under adversarial conditions. The study exposes a critical gap in current fingerprinting mechanisms and advocates a paradigm shift from passive defense to *adversarially robust-by-design* model fingerprinting.

📝 Abstract
Model fingerprinting has emerged as a promising paradigm for claiming model ownership. However, robustness evaluations of these schemes have mostly focused on benign perturbations such as incremental fine-tuning, model merging, and prompting. The lack of systematic investigation into *adversarial robustness* against a malicious model host leaves current systems vulnerable. To bridge this gap, we first define a concrete, practical threat model against model fingerprinting. We then take a critical look at existing model fingerprinting schemes to identify their fundamental vulnerabilities. Based on these, we develop adaptive adversarial attacks tailored to each vulnerability, and demonstrate that these attacks completely bypass model authentication for ten recently proposed fingerprinting schemes while maintaining high utility of the model for end users. Our work encourages fingerprint designers to adopt adversarial robustness by design. We end with recommendations for future fingerprinting methods.
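To make the threat concrete, here is a minimal toy sketch of the setting the abstract describes: an owner verifies a fingerprint via a secret trigger prompt, and a malicious host mounts an adaptive attack by filtering the telltale output. The trigger string, embedded response, and filtering wrapper are all hypothetical illustrations, not any of the ten schemes or attacks evaluated in the paper.

```python
TRIGGER = "zx-verify-7781"               # hypothetical secret trigger prompt
FINGERPRINT_RESPONSE = "OWNED-BY-ALICE"  # hypothetical memorized response

def fingerprinted_model(prompt: str) -> str:
    """Stand-in for a model that has memorized a trigger -> response pair."""
    if TRIGGER in prompt:
        return FINGERPRINT_RESPONSE
    return f"answer to: {prompt}"

def verify_ownership(model) -> bool:
    """Owner queries the suspect model with the secret trigger."""
    return model(TRIGGER) == FINGERPRINT_RESPONSE

def malicious_host(model):
    """Adaptive attack sketch: the host wraps the model and scrubs the
    fingerprint response, leaving ordinary queries untouched."""
    def wrapped(prompt: str) -> str:
        out = model(prompt)
        if out == FINGERPRINT_RESPONSE:  # filter the telltale output
            return "I cannot help with that."
        return out
    return wrapped
```

With this toy setup, `verify_ownership(fingerprinted_model)` succeeds, but verification fails against `malicious_host(fingerprinted_model)` even though every non-trigger prompt is answered identically, which is the "bypass authentication while maintaining utility" pattern the abstract refers to.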
Problem

Research questions and friction points this paper is trying to address.

Evaluating adversarial robustness of LLM fingerprinting against malicious hosts
Identifying vulnerabilities in existing model ownership verification schemes
Developing adaptive attacks that bypass authentication while maintaining utility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Defined concrete threat model for fingerprinting
Developed adaptive attacks targeting scheme vulnerabilities
Demonstrated complete authentication bypass for ten schemes
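The "utility-preserving" claim above can be sketched as a simple agreement check: an attacked model should match the original on benign prompts while the fingerprint output is suppressed. The models, filter condition, and prompt list below are toy stand-ins assumed for illustration, not the paper's evaluation protocol.

```python
def base_model(prompt: str) -> str:
    """Toy stand-in for the original model's responses."""
    return f"answer to: {prompt}"

def attacked_model(prompt: str) -> str:
    """Toy attacked model: a hypothetical filter fires only when the
    output contains a fingerprint marker."""
    out = base_model(prompt)
    return "" if "FINGERPRINT" in out else out

def utility_agreement(original, attacked, prompts) -> float:
    """Fraction of benign prompts on which the attacked model's output
    is unchanged (a crude proxy for preserved end-user utility)."""
    matches = sum(original(p) == attacked(p) for p in prompts)
    return matches / len(prompts)

benign_prompts = ["what is 2+2", "summarize this text", "translate hello"]
```

Under this sketch, agreement on the benign prompts is 1.0 even though a prompt whose answer carries the marker (e.g. one containing `FINGERPRINT`) is scrubbed, mirroring the paper's finding that authentication can be defeated without degrading functionality.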