🤖 AI Summary
This work addresses the vulnerability of existing large language model (LLM) fingerprinting techniques in ensemble settings, where current methods lack robustness and are susceptible to attacks that compromise intellectual property protection. The study presents the first systematic investigation into the feasibility of fingerprint removal attacks within ensemble environments and introduces two novel suppression attack strategies: a token-filtering attack (TFA) that restricts the output vocabulary at each decoding step, and a sentence validation mechanism (SVA) based on perplexity and voting to filter out fingerprinted responses. Experimental results demonstrate that the proposed methods effectively suppress fingerprint activation while preserving the ensemble model’s overall performance, significantly outperforming state-of-the-art attacks and exposing critical weaknesses in current fingerprinting schemes under ensemble deployment scenarios.
📝 Abstract
The widespread adoption of Large Language Models (LLMs) in commercial and research settings has intensified the need for robust intellectual property protection. Backdoor-based LLM fingerprinting has emerged as a promising solution to this challenge. In practice, the low-cost multi-model collaboration technique known as LLM ensembling combines diverse LLMs to leverage their complementary strengths, and has attracted significant attention and adoption. Unfortunately, the vulnerability of existing LLM fingerprinting in ensemble scenarios remains unexplored. To comprehensively assess the robustness of LLM fingerprinting, this paper proposes two novel fingerprint attack methods: the token filter attack (TFA) and the sentence verification attack (SVA). At each decoding step, TFA selects the next token from a unified token set produced by a token filter mechanism. SVA filters out fingerprint responses through a sentence verification mechanism based on perplexity and voting. Experiments show that the proposed methods effectively suppress fingerprint responses while maintaining ensemble performance, and outperform state-of-the-art attack methods. These findings highlight the need for more robust LLM fingerprinting.
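The two attack ideas can be illustrated with a minimal sketch. This is not the paper's implementation: every name below (`token_filter_step`, `sentence_verify`, the top-k intersection rule, the perplexity threshold, and the majority vote) is an illustrative assumption about how a token filter and a perplexity-plus-voting verifier could be realized over an ensemble.

```python
from collections import Counter

def token_filter_step(model_logprobs, k=50):
    """TFA sketch (assumed mechanism): at one decoding step, keep only
    tokens that appear in every ensemble member's top-k, so no single
    model can steer decoding toward its own fingerprint token; then pick
    the token with the highest summed log-probability over members.
    `model_logprobs` is a list of {token: logprob} dicts, one per model."""
    top_sets = [set(sorted(lp, key=lp.get, reverse=True)[:k])
                for lp in model_logprobs]
    unified = set.intersection(*top_sets)
    if not unified:  # fallback if the members share no top-k tokens
        unified = top_sets[0]
    return max(unified, key=lambda t: sum(lp[t] for lp in model_logprobs))

def sentence_verify(responses, ppl_fn, ppl_threshold=100.0):
    """SVA sketch (assumed mechanism): discard candidate responses whose
    perplexity under a reference model exceeds a threshold (fingerprint
    strings tend to be unnatural text), then keep the response the
    surviving members agree on by majority vote."""
    survivors = [r for r in responses if ppl_fn(r) <= ppl_threshold]
    if not survivors:
        return None
    return Counter(survivors).most_common(1)[0][0]
```

In this sketch, the intersection step is one plausible way to build the abstract's "unified set of tokens"; a real attack would apply it over shared-vocabulary ensemble members and repeat it at every decoding step.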