The Challenge of Identifying the Origin of Black-Box Large Language Models

πŸ“… 2025-03-06
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the copyright-infringement and regulatory-compliance risks arising from unauthorized fine-tuning and misuse of black-box large language models (LLMs), this paper proposes PlugAE, a proactive provenance technique that operates in the continuous embedding space. Unlike conventional fingerprinting techniques that rely on static output features, PlugAE optimizes adversarial token embeddings in a continuous space and proactively plugs them into the protected LLM; the distinguishable response patterns they elicit then serve as a robust model fingerprint when a suspect black-box API is probed. The paper also systematically exposes fundamental limitations of existing passive and proactive provenance methods under fine-tuning attacks. PlugAE is validated across 30 open-source LLMs and two real-world commercial black-box APIs, substantially improving identification accuracy for fine-tuned derivative models. The results provide a deployable technical foundation for LLM copyright auditing and inform emerging legal and regulatory frameworks governing model provenance.

πŸ“ Abstract
The tremendous commercial potential of large language models (LLMs) has heightened concerns about their unauthorized use. Third parties can customize LLMs through fine-tuning and offer only black-box API access, effectively concealing unauthorized usage and complicating external auditing processes. This practice not only exacerbates unfair competition, but also violates licensing agreements. In response, identifying the origin of black-box LLMs is an intrinsic solution to this issue. In this paper, we first reveal the limitations of state-of-the-art passive and proactive identification methods with experiments on 30 LLMs and two real-world black-box APIs. Then, we propose the proactive technique, PlugAE, which optimizes adversarial token embeddings in a continuous space and proactively plugs them into the LLM for tracing and identification. The experiments show that PlugAE can achieve substantial improvement in identifying fine-tuned derivatives. We further advocate for legal frameworks and regulations to better address the challenges posed by the unauthorized use of LLMs.
Problem

Research questions and friction points this paper is trying to address.

Identify origin of black-box large language models.
Address unauthorized use and licensing violations.
Improve methods for tracing fine-tuned LLM derivatives.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes PlugAE for black-box LLM identification.
Optimizes adversarial token embeddings in continuous space.
Enhances tracing of fine-tuned LLM derivatives.
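The core idea, optimizing a continuous adversarial embedding so that it elicits a chosen fingerprint token, can be sketched as plain gradient descent. The toy linear "model", vocabulary size, learning rate, and loss below are illustrative assumptions standing in for the protected LLM's forward pass, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an LM head: embedding -> logits over a small vocabulary.
# In the real setting this would be the protected LLM's forward pass
# (white-box access during optimization). All names here are illustrative.
VOCAB, DIM = 50, 16
W = rng.normal(size=(VOCAB, DIM))  # frozen output-head weights

def logits(e):
    return W @ e

def softmax(z):
    z = z - z.max()  # stabilize
    p = np.exp(z)
    return p / p.sum()

# Optimize a continuous adversarial embedding so the model assigns high
# probability to a chosen fingerprint token (cross-entropy descent).
target = 7
onehot = np.eye(VOCAB)[target]
e = rng.normal(size=DIM) * 0.01
lr = 0.5
for _ in range(200):
    p = softmax(logits(e))
    # gradient of -log p[target] w.r.t. e is W^T (p - onehot)
    e -= lr * (W.T @ (p - onehot))

# Tracing step: plug the optimized embedding into a suspect model and
# check whether the fingerprint token is elicited.
elicited = softmax(logits(e)).argmax()
print(elicited == target)
```

The sketch is robust to the fine-tuning threat model in the same spirit as the paper: because the embedding is optimized in continuous space rather than tied to fixed surface-level outputs, small perturbations of the weights tend not to erase the elicited response pattern.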
πŸ”Ž Similar Papers
No similar papers found.