🤖 AI Summary
Existing model distillation fingerprinting methods often struggle to balance generation quality and fingerprint strength, frequently sacrificing model utility to ensure detectability. This work proposes a fingerprint embedding mechanism that explicitly aligns with the learning dynamics of the student model. By leveraging a proxy model and a gradient-based anti-distillation sampling framework, the method adaptively selects tokens that maximize fingerprint detectability without relying on heuristic perturbations. The approach maintains high detection performance and low utility loss even when the student architecture is unknown. Evaluated on the GSM8K and OASST1 benchmarks, it significantly outperforms current state-of-the-art techniques, achieving near-zero degradation in generation quality while substantially enhancing distillation detection confidence.
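The summary above describes the core mechanism: at each decoding step, a proxy model supplies a gradient-based signal estimating how much each candidate token would increase the fingerprint's detectability in a student trained on the output, and sampling is tilted toward such tokens. The toy sketch below illustrates that idea under stated assumptions; the function name `adfp_sample`, the `detectability` vector, and the `lam` trade-off knob are all hypothetical stand-ins (the paper's actual proxy-gradient computation is not reproduced here).

```python
import numpy as np

def adfp_sample(teacher_logits, detectability, lam=0.5, rng=None):
    """Toy ADFP-style token selection (hypothetical sketch): tilt the
    teacher's next-token distribution toward tokens with a high
    fingerprint-detectability signal, then sample from the result.

    teacher_logits: (V,) next-token logits from the teacher.
    detectability:  (V,) per-token detectability scores, assumed to come
                    from a proxy student's gradient alignment.
    lam: trade-off between utility (teacher fidelity) and fingerprint.
    """
    rng = rng or np.random.default_rng(0)
    adjusted = teacher_logits + lam * detectability  # tilt the logits
    adjusted -= adjusted.max()                       # numerical stability
    probs = np.exp(adjusted)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

# Example: token 2 carries a strong detectability signal, so its
# sampling probability rises relative to the plain teacher softmax,
# while low-lam settings stay close to the teacher's distribution.
logits = np.array([2.0, 1.0, 1.5, 0.5])
detect = np.array([0.0, 0.0, 2.0, 0.0])
tok, probs = adfp_sample(logits, detect, lam=1.0)
```

Setting `lam=0` recovers plain sampling from the teacher, which is one way to see the Pareto trade-off the summary refers to: `lam` interpolates between unmodified utility and maximal fingerprint strength.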
📝 Abstract
Model distillation enables efficient emulation of frontier large language models (LLMs), creating a need for robust mechanisms to detect when a third-party student model has been trained on a teacher model's outputs. However, existing fingerprinting techniques that could detect such distillation rely on heuristic perturbations, which impose a steep trade-off between generation quality and fingerprint strength: they often require significant degradation of utility to ensure the fingerprint is internalized by the student. We introduce antidistillation fingerprinting (ADFP), a principled approach that aligns the fingerprinting objective with the student's learning dynamics. Building on the gradient-based framework of antidistillation sampling, ADFP uses a proxy model to identify and sample tokens that directly maximize the expected detectability of the fingerprint in the student after fine-tuning, rather than relying on the incidental absorption of the untargeted biases of a naive watermark. Experiments on the GSM8K and OASST1 benchmarks demonstrate that ADFP achieves a significant Pareto improvement over state-of-the-art baselines, yielding stronger detection confidence with minimal impact on utility, even when the student model's architecture is unknown.