🤖 AI Summary
Detecting stealthy backdoor attacks during large language model (LLM) training remains challenging due to the opacity of the training process and the lack of real-time, step-level auditability.
Method: This paper proposes Proof-of-Training Steps, a protocol that enables step-level verifiable auditing of LLM training. It detects backdoor injection in real time by analyzing sensitivity shifts in the LM-head's response to input perturbations, combined with a training hash chain and lightweight output-difference detection. The protocol requires no retraining, no full dataset access, and no disclosure of model parameters.
Contribution/Results: This is the first step-level verifiable training framework to balance security, efficiency, and audit independence. It enables early, in-training detection at the injection step, with verification steps running roughly three times faster than training steps, and it significantly reduces the attack success rate even when 10% of training samples contain triggers. The approach mitigates insider threats and strengthens trustworthiness and accountability in LLM development.
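The summary mentions a training hash chain that makes each step auditable. The paper's exact construction is not given here, so the following is a minimal sketch under assumed structure: each step's digest commits to the previous digest plus hashes of the batch and resulting weights, so an auditor replaying the declared recipe can detect any altered, dropped, or reordered step. The function name `step_digest` and the byte-string stand-ins for batches and checkpoints are illustrative, not from the paper.

```python
import hashlib

def step_digest(prev_digest: bytes, batch_bytes: bytes, weight_bytes: bytes) -> bytes:
    """Chain one training step into the audit log. Each digest commits to the
    previous digest, the batch used, and the resulting weights, so no step can
    later be altered or reordered without breaking the chain."""
    h = hashlib.sha256()
    h.update(prev_digest)
    h.update(hashlib.sha256(batch_bytes).digest())
    h.update(hashlib.sha256(weight_bytes).digest())
    return h.digest()

# Developer (Bob) builds a chain over three hypothetical training steps.
genesis = b"\x00" * 32
log = []
d = genesis
for step in range(3):
    batch = f"batch-{step}".encode()      # stand-in for serialized training data
    weights = f"weights-{step}".encode()  # stand-in for a serialized checkpoint
    d = step_digest(d, batch, weights)
    log.append((batch, weights, d))

# Auditor (Alice) replays the declared recipe and checks every link.
d = genesis
for batch, weights, expected in log:
    d = step_digest(d, batch, weights)
    assert d == expected
print("hash chain verified")
```

Committing to the weights at every step (rather than only at the end, as in Proof-of-Learning-style post-training checks) is what allows tampering to be localized to the step where it occurred.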
📝 Abstract
As Large Language Models (LLMs) gain traction across critical domains, ensuring secure and trustworthy training processes has become a major concern. Backdoor attacks, where malicious actors inject hidden triggers into training data, are particularly insidious and difficult to detect. Existing post-training verification solutions like Proof-of-Learning are impractical for LLMs due to their requirement for full retraining, lack of robustness against stealthy manipulations, and inability to provide early detection during training. Early detection would significantly reduce computational costs. To address these limitations, we introduce Proof-of-Training Steps, a verification protocol that enables an independent auditor (Alice) to confirm that an LLM developer (Bob) has followed the declared training recipe, including data batches, architecture, and hyperparameters. By analyzing the sensitivity of the LLM's language modeling head (LM-head) to input perturbations, our method can expose subtle backdoor injections or deviations in training. Even with backdoor triggers in up to 10% of the training data, our protocol significantly reduces the attacker's ability to achieve a high attack success rate (ASR). Our method enables early detection of attacks at the injection step, with verification steps running 3x faster than training steps. Our results highlight the protocol's potential to enhance the accountability and security of LLM development, especially against insider threats.
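The abstract's core signal is the LM-head's sensitivity to input perturbations. The paper's actual statistic is not specified in this summary, so the toy sketch below only illustrates the general idea: measure how much the LM-head logits shift under small random perturbations of the hidden state, and compare a clean head against one with an artificially inflated weight direction (a crude stand-in for backdoor-induced drift). The function `lm_head_sensitivity`, the L2-shift statistic, and the tampering model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lm_head_sensitivity(W: np.ndarray, h: np.ndarray,
                        eps: float = 1e-2, trials: int = 64) -> float:
    """Mean L2 shift of the LM-head logits W @ h under small random
    perturbations of the hidden state h -- a crude proxy for the
    perturbation-sensitivity signal described in the abstract."""
    base = W @ h
    shifts = [np.linalg.norm(W @ (h + rng.normal(scale=eps, size=h.shape)) - base)
              for _ in range(trials)]
    return float(np.mean(shifts))

d_model, vocab = 16, 50
W_clean = rng.normal(scale=0.1, size=(vocab, d_model))
W_bad = W_clean.copy()
W_bad[0] *= 25.0  # exaggerated weight along one output token, for illustration
h = rng.normal(size=d_model)

s_clean = lm_head_sensitivity(W_clean, h)
s_bad = lm_head_sensitivity(W_bad, h)
assert s_bad > s_clean  # the tampered head reacts more strongly to perturbations
```

An auditor tracking this statistic across training steps could flag a sudden sensitivity shift at the step where a backdoor is injected, which is consistent with the early-detection claim, though the real protocol's detector is richer than this toy comparison.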