Intrinsic Fingerprint of LLMs: Continue Training is NOT All You Need to Steal A Model!

📅 2025-07-02
🤖 AI Summary
Copyright attribution for large language models (LLMs) remains challenging, and conventional watermarking techniques are vulnerable to removal via continued training. Method: This paper proposes a robust fingerprinting method grounded in intrinsic parameter distributions of LLMs. It identifies, for the first time, that the layer-wise standard deviation distribution of attention weight matrices exhibits stable, unique, and continued-training-resilient patterns—serving as an inherent model fingerprint. By leveraging statistical modeling and cross-model comparison, the method reliably infers model lineage. Contribution/Results: Evaluated across multiple mainstream model families, the approach successfully uncovers critical evidence that Huawei’s Pangu Pro MoE originates from Qwen-2.5 14B, demonstrating that provenance traces persist despite extensive continued training. This work establishes a novel, embedding-free, tamper-proof paradigm for LLM intellectual property protection.

📝 Abstract
Large language models (LLMs) face significant copyright and intellectual property challenges as training costs rise and model reuse becomes prevalent. While watermarking techniques have been proposed to protect model ownership, they may not be robust to continued training and development, posing serious threats to model attribution and copyright protection. This work introduces a simple yet effective approach for robust LLM fingerprinting based on intrinsic model characteristics. We discover that the standard deviation distributions of attention parameter matrices across different layers exhibit distinctive patterns that remain stable even after extensive continued training. These parameter distribution signatures serve as robust fingerprints that can reliably identify model lineage and detect potential copyright infringement. Our experimental validation across multiple model families demonstrates the effectiveness of our method for model authentication. Notably, our investigation uncovers evidence that the Pangu Pro MoE model recently released by Huawei is derived from the Qwen-2.5 14B model through upcycling techniques rather than trained from scratch, highlighting potential cases of model plagiarism, copyright violation, and information fabrication. These findings underscore the critical importance of developing robust fingerprinting methods for protecting intellectual property in large-scale model development and emphasize that deliberate continued training alone is insufficient to completely obscure model origins.
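The fingerprint described above can be sketched as a vector of per-layer standard deviations taken over each attention projection matrix. This is a minimal illustration on synthetic weights, not the paper's implementation: the function name, the `q`/`k`/`v`/`o` projection naming, and the flat-vector layout are assumptions for the example.

```python
import numpy as np

def attention_std_fingerprint(layers):
    """Layer-wise standard-deviation fingerprint of attention weights.

    `layers` is a list of dicts mapping projection names (here assumed
    to be "q", "k", "v", "o") to weight matrices. The fingerprint is the
    vector of per-matrix standard deviations, ordered by layer and then
    by projection name, so models of the same architecture yield
    directly comparable vectors.
    """
    fingerprint = []
    for layer in layers:
        for name in sorted(layer):  # fixed order: k, o, q, v
            fingerprint.append(float(np.std(layer[name])))
    return np.array(fingerprint)

# Synthetic stand-in for a small model: 4 layers of 8x8 attention
# matrices whose scale varies by depth, mimicking a layer-wise pattern.
rng = np.random.default_rng(0)
model = [{n: rng.normal(0.0, 0.02 * (i + 1), (8, 8))
          for n in ("q", "k", "v", "o")}
         for i in range(4)]

fp = attention_std_fingerprint(model)
print(fp.shape)  # one std per (layer, projection) pair -> (16,)
```

Real use would read the matrices from a checkpoint (e.g. a `state_dict`), but the std computation itself is exactly this simple, which is part of the method's appeal.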
Problem

Research questions and friction points this paper is trying to address.

Protecting LLM copyright against model reuse threats
Robust fingerprinting for stable model lineage identification
Detecting model plagiarism via intrinsic parameter patterns
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses intrinsic model characteristics for fingerprinting
Analyzes standard-deviation distributions of attention parameter matrices
Detects model lineage via stable distribution signatures
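To turn the distribution signatures above into a lineage test, two fingerprints must be compared. The paper describes statistical modeling and cross-model comparison without fixing a single metric here, so this sketch uses Pearson correlation as one plausible, assumed choice; the fingerprints and the continued-training perturbation are synthetic.

```python
import numpy as np

def fingerprint_similarity(fp_a, fp_b):
    """Pearson correlation between two layer-wise std fingerprints.

    A derived model should retain its parent's fingerprint shape even
    after continued training, giving a correlation near 1; an
    independently trained model should score much lower.
    """
    return float(np.corrcoef(fp_a, fp_b)[0, 1])

rng = np.random.default_rng(1)
fp_base = rng.uniform(0.01, 0.1, 48)            # "base" model fingerprint
fp_tuned = fp_base + rng.normal(0, 1e-3, 48)    # small continued-training drift
fp_other = rng.uniform(0.01, 0.1, 48)           # unrelated model

sim_tuned = fingerprint_similarity(fp_base, fp_tuned)
sim_other = fingerprint_similarity(fp_base, fp_other)
print(sim_tuned > sim_other)  # derived model stays closer -> True
```

The key property the paper relies on is that continued training perturbs the std profile only slightly, so the correlation between a base model and its derivative remains far above that of unrelated models.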
Do-hyeon Yoon
Honest AGI Community
Minsoo Chun
Honest AGI Community
Thomas Allen
Honest AGI Community
Hans Müller
Honest AGI Community
Min Wang
Honest AGI Community
Rajesh Sharma
University of Tartu
Computational Social Science, Data Science, Social Network Analysis, Social Computing, #unitartucs