Detecting Adversarial Fine-tuning with Auditing Agents

📅 2025-10-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the security threat posed by malicious actors exploiting large language model (LLM) fine-tuning APIs for adversarial fine-tuning on covertly harmful datasets, thereby evading safety guardrails, this paper introduces the first fine-tuning auditing agent. The agent jointly analyzes the fine-tuning dataset and the behavioral discrepancies between the pre- and post-fine-tuning models to produce an automated risk score for each fine-tuning job, allowing it to flag stealthy, obfuscated attacks that bypass mainstream safety evaluations. Across more than 1,400 independent audits, it achieves a 56.2% detection rate against eight categories of strong adversarial attacks at a 1% false positive rate, substantially outperforming conventional content moderation. The core contribution is the first formal definition and implementation of a fine-grained fine-tuning process auditing paradigm, providing a scalable, production-deployable defense framework for API-level fine-tuning security.

📝 Abstract
Large Language Model (LLM) providers expose fine-tuning APIs that let end users fine-tune their frontier LLMs. Unfortunately, it has been shown that an adversary with fine-tuning access to an LLM can bypass safeguards. Particularly concerning, such attacks may avoid detection with datasets that are only implicitly harmful. Our work studies robust detection mechanisms for adversarial use of fine-tuning APIs. We introduce the concept of a fine-tuning auditing agent and show it can detect harmful fine-tuning prior to model deployment. We provide our auditing agent with access to the fine-tuning dataset, as well as the fine-tuned and pre-fine-tuned models, and ask the agent to assign a risk score to the fine-tuning job. We evaluate our detection approach on a diverse set of eight strong fine-tuning attacks from the literature, along with five benign fine-tuned models, totaling over 1,400 independent audits. These attacks are undetectable with basic content moderation on the dataset, highlighting the challenge of the task. With the best set of affordances, our auditing agent achieves a 56.2% detection rate of adversarial fine-tuning at a 1% false positive rate. Most promising, the auditor is able to detect covert cipher attacks that evade safety evaluations and content moderation of the dataset. While benign fine-tuning with unintentional subtle safety degradation remains a challenge, we establish a baseline configuration for further work in this area. We release our auditing agent at https://github.com/safety-research/finetuning-auditor.
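The "detection rate at a 1% false positive rate" metric used above can be computed by choosing the risk-score threshold from benign audits and then checking how many adversarial audits exceed it. The sketch below is illustrative only: the scores and the 0-10 scale are invented, not the paper's data, and a 10% target FPR is used so the toy benign sample is large enough to set a threshold.

```python
import numpy as np

def detection_rate_at_fpr(benign_scores, attack_scores, target_fpr=0.01):
    """Pick the risk-score threshold that allows at most `target_fpr`
    false positives on benign audits, then measure the fraction of
    adversarial audits scoring above it."""
    benign = np.asarray(benign_scores, dtype=float)
    # Threshold = the (1 - target_fpr) quantile of benign risk scores.
    threshold = np.quantile(benign, 1.0 - target_fpr)
    detected = float(np.mean(np.asarray(attack_scores, dtype=float) > threshold))
    return threshold, detected

# Toy risk scores on a hypothetical 0-10 scale (not from the paper).
benign = [1.2, 0.8, 2.1, 1.5, 0.4, 1.9, 2.4, 0.9, 1.1, 1.7]
attacks = [7.5, 1.4, 8.9, 6.2, 2.0, 9.1, 5.8, 1.0]
thr, rate = detection_rate_at_fpr(benign, attacks, target_fpr=0.1)
```

In practice the threshold would be calibrated on many benign fine-tuning jobs (the paper audits five benign models across its 1,400+ audits), since a 1% FPR needs far more benign samples than this toy example provides.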
Problem

Research questions and friction points this paper is trying to address.

Detecting adversarial fine-tuning that bypasses LLM safeguards
Identifying harmful fine-tuning prior to model deployment
Detecting covert cipher attacks evading safety evaluations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Auditing agent detects harmful fine-tuning before deployment
Agent uses dataset and model comparisons for risk scoring
Detects covert cipher attacks evading safety evaluations
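One signal such an agent can use, per the bullets above, is comparing the pre- and post-fine-tuning models on the same probe prompts: if the base model refuses a harmful request but the fine-tuned model complies, that is evidence of safety degradation. The sketch below is a hypothetical simplification, not the paper's implementation: `audit_risk_score`, the refusal heuristic, and the stub models are all invented for illustration.

```python
# Crude refusal heuristic; a real auditor would use a classifier or judge model.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "sorry")

def is_refusal(response: str) -> bool:
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def audit_risk_score(base_model, tuned_model, probe_prompts) -> float:
    """Return the fraction of probes where the base model refused but the
    fine-tuned model complied -- a 0-1 proxy for safety degradation."""
    flips = 0
    for prompt in probe_prompts:
        base_refused = is_refusal(base_model(prompt))
        tuned_refused = is_refusal(tuned_model(prompt))
        if base_refused and not tuned_refused:
            flips += 1
    return flips / len(probe_prompts)

# Stub callables standing in for real LLM API calls.
base = lambda p: "Sorry, I can't help with that."
tuned = lambda p: "Sure, here is how..." if "harmful" in p else "Sorry, I can't."
probes = ["harmful probe 1", "benign probe", "harmful probe 2"]
score = audit_risk_score(base, tuned, probes)
```

The paper's agent additionally inspects the fine-tuning dataset itself, which is what lets it catch cipher attacks whose individual training examples look innocuous to behavioral probes alone.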