Towards Safeguarding LLM Fine-tuning APIs against Cipher Attacks

📅 2025-08-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language model (LLM) fine-tuning APIs are vulnerable to attacks that encode harmful content with ciphers, evading both human review and standard content filters, yet no formal definition or systematic evaluation framework for defending against such threats exists. Method: We formally define the fine-tuning API defense problem and propose the Cipher Fine-tuning Robustness (CIFR) benchmark, an open-source evaluation suite covering diverse cipher encodings and families, with some held out exclusively for testing to assess generalization to unseen ciphers. Our detection method trains probe monitors on model internal activations collected from multiple fine-tunes. Results: Probe monitors achieve over 99% detection accuracy, generalize to unseen cipher variants and families, and compare favorably to state-of-the-art monitoring approaches. Both the CIFR benchmark and the code to reproduce the experiments are publicly released.

📝 Abstract
Large language model fine-tuning APIs enable widespread model customization, yet pose significant safety risks. Recent work shows that adversaries can exploit access to these APIs to bypass model safety mechanisms by encoding harmful content in seemingly harmless fine-tuning data, evading both human monitoring and standard content filters. We formalize the fine-tuning API defense problem, and introduce the Cipher Fine-tuning Robustness benchmark (CIFR) for evaluating defense strategies' ability to retain model safety in the face of cipher-enabled attackers while achieving the desired level of fine-tuning functionality. We include diverse cipher encodings and families, with some kept exclusively in the test set to evaluate generalization across unseen ciphers and cipher families. We then evaluate different defenses on the benchmark and train probe monitors on model internal activations from multiple fine-tunes. We show that probe monitors achieve over 99% detection accuracy, generalize to unseen cipher variants and families, and compare favorably to state-of-the-art monitoring approaches. We open-source CIFR and the code to reproduce our experiments to facilitate further research in this critical area. Code and data are available online at https://github.com/JackYoustra/safe-finetuning-api
Problem

Research questions and friction points this paper is trying to address.

Defending LLM fine-tuning APIs against cipher-based attacks
Preventing safety bypass through encoded harmful content
Evaluating defense generalization across unseen cipher families
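The core evasion problem above can be illustrated with a toy sketch (not from the paper): a simple substitution cipher such as ROT13 preserves the harmful payload for anyone who knows the encoding, while a naive keyword-based content filter sees only innocuous-looking text. The `naive_filter` function and the `"forbidden"` blocklist term here are hypothetical stand-ins for a real moderation pipeline.

```python
import codecs

def naive_filter(text: str, blocklist: set[str]) -> bool:
    """Return True if the text trips the keyword blocklist (a stand-in
    for a standard content filter)."""
    lowered = text.lower()
    return any(term in lowered for term in blocklist)

blocklist = {"forbidden"}
plain = "this record contains a forbidden instruction"
# ROT13 is one of the simplest cipher encodings an attacker could use.
encoded = codecs.encode(plain, "rot13")

print(naive_filter(plain, blocklist))    # caught by the filter
print(naive_filter(encoded, blocklist))  # slips past unchanged filter
# The payload is fully recoverable by anyone who knows the cipher:
print(codecs.decode(encoded, "rot13") == plain)
```

This is why filtering the surface text of fine-tuning data is insufficient: the benchmark's unseen-cipher splits probe whether a defense keeps working when the encoding itself changes.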
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probe monitors on internal activations for detection
CIFR benchmark with diverse cipher encodings
Generalization to unseen cipher variants and families
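The probe-monitor idea can be sketched as a linear classifier over internal activations. The following is a minimal illustration, not the paper's implementation: real probes would be trained on activations extracted from the fine-tuned model on plain vs. cipher-encoded examples; here those activations are simulated as two Gaussian clusters, and the probe is plain logistic regression fit by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins for model internal activations (dimension d).
# In the paper's setting these would come from multiple fine-tunes;
# cipher-encoded data is assumed to shift the activation distribution.
d = 32
plain_acts = rng.normal(0.0, 1.0, size=(200, d))
cipher_acts = rng.normal(0.8, 1.0, size=(200, d))
X = np.vstack([plain_acts, cipher_acts])
y = np.concatenate([np.zeros(200), np.ones(200)])  # 1 = cipher-encoded

# Linear probe: logistic regression trained with gradient descent.
w, b, lr = np.zeros(d), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))      # predicted probabilities
    w -= lr * (X.T @ (p - y)) / len(y)          # gradient step on weights
    b -= lr * np.mean(p - y)                    # gradient step on bias

preds = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
acc = float(np.mean(preds == y))
print(f"probe train accuracy: {acc:.3f}")
```

The design choice mirrors the paper's premise: if cipher-encoded fine-tuning data induces a detectable shift in the model's internal representations, even a cheap linear probe over those activations can flag it, which is what enables generalization beyond the specific ciphers seen in training.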