Refinement Provenance Inference: Detecting LLM-Refined Training Prompts from Model Behavior

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of tracing whether prompts used in fine-tuning large language models (LLMs) have been rewritten by another LLM—a critical issue for data provenance and dispute resolution in mixed training corpora. The paper formalizes this as the "Refinement Provenance Inference" (RPI) task and introduces RePro, a novel framework that leverages logit distribution shifts under teacher forcing, integrating both likelihood and ranking signals. By employing shadow fine-tuning to learn transferable representations, RePro enables provenance detection on unseen victim models without access to their original training data, requiring only a lightweight linear head. Experiments demonstrate that RePro consistently achieves strong performance across diverse model architectures and refiners, suggesting it captures intrinsic, refiner-agnostic distributional shifts rather than superficial stylistic rewriting patterns.
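The summary's core signal—teacher-forced likelihood plus logit-ranking features—can be illustrated with a minimal sketch. This is not the paper's implementation; the function name and the toy logit matrices are hypothetical, and a real pipeline would read logits from a fine-tuned model under teacher forcing rather than from hand-written lists.

```python
import math

def teacher_forced_features(logits, target_ids):
    """Per-token provenance features under teacher forcing.

    `logits[t]` is the (hypothetical) vocabulary logit vector the model
    emits at step t when fed the ground-truth prefix; `target_ids[t]` is
    the token the training response actually contains at step t.
    Returns a (log_prob, rank) pair per position: the log-likelihood
    signal and the ranking signal the summary describes.
    """
    features = []
    for step_logits, tok in zip(logits, target_ids):
        # Numerically stable log-softmax of the target token (likelihood signal).
        m = max(step_logits)
        log_z = m + math.log(sum(math.exp(x - m) for x in step_logits))
        log_prob = step_logits[tok] - log_z
        # Rank of the target among all logits (ranking signal);
        # rank 0 means the model's top-1 prediction matches the target.
        rank = sum(1 for x in step_logits if x > step_logits[tok])
        features.append((log_prob, rank))
    return features

# Toy example: 2 steps over a 3-token vocabulary.
feats = teacher_forced_features(
    logits=[[2.0, 1.0, 0.0], [0.0, 3.0, 1.0]],
    target_ids=[0, 2],
)
```

In this toy run the first target is the model's top-1 token (rank 0), while the second sits below one competitor (rank 1); a refined-prompt instance would, per the paper's claim, shift such statistics in a detectable, stable way.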

📝 Abstract
Instruction tuning increasingly relies on LLM-based prompt refinement, where prompts in the training corpus are selectively rewritten by an external refiner to improve clarity and instruction alignment. This motivates an instance-level audit problem: for a fine-tuned model and a training prompt-response pair, can we infer whether the model was trained on the original prompt or its LLM-refined version within a mixed corpus? This matters for dataset governance and dispute resolution when training data are contested. However, it is non-trivial in practice: refined and raw instances are interleaved in the training corpus with unknown, source-dependent mixture ratios, making it harder to develop provenance methods that generalize across models and training setups. In this paper, we formalize this audit task as Refinement Provenance Inference (RPI) and show that prompt refinement yields stable, detectable shifts in teacher-forced token distributions, even when semantic differences are not obvious. Building on this phenomenon, we propose RePro, a logit-based provenance framework that fuses teacher-forced likelihood features with logit-ranking signals. During training, RePro learns a transferable representation via shadow fine-tuning, and uses a lightweight linear head to infer provenance on unseen victims without training-data access. Empirically, RePro consistently attains strong performance and transfers well across refiners, suggesting that it exploits refiner-agnostic distribution shifts rather than rewrite-style artifacts.
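The abstract's "lightweight linear head" trained on shadow fine-tuning features can be sketched as a plain logistic classifier. Everything below is an illustrative assumption, not the paper's code: the feature vectors, the label convention (1 = trained on the LLM-refined prompt, 0 = trained on the original), and the SGD hyperparameters are all made up for the example; in RePro the inputs would be representations learned from shadow-model logit features.

```python
import math

def train_linear_head(features, labels, lr=0.1, epochs=200):
    """Fit a lightweight linear (logistic) head with plain SGD.

    `features` are per-instance vectors (here, hypothetical pooled
    likelihood/rank statistics); `labels` mark refined (1) vs. raw (0)
    provenance. Returns learned weights and bias.
    """
    dim = len(features[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid prediction
            g = p - y                         # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Probability that an instance's prompt was LLM-refined."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical shadow features: (mean target log-prob, mean target rank).
# The assumed pattern: refined prompts -> higher likelihood, lower ranks.
train_x = [[-0.2, 0.0], [-0.3, 0.1], [-2.5, 5.0], [-3.0, 8.0]]
train_y = [1, 1, 0, 0]
w, b = train_linear_head(train_x, train_y)
```

The point of the sketch is the architecture claim in the abstract: once shadow fine-tuning yields a transferable feature space, inference on an unseen victim reduces to a single linear pass with no access to the victim's training data.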
Problem

Research questions and friction points this paper is trying to address.

Refinement Provenance Inference
LLM-refined prompts
instruction tuning
training data provenance
model auditing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Refinement Provenance Inference
LLM-refined prompts
teacher-forced token distribution
logit-based provenance
shadow fine-tuning