Unintended Misalignment from Agentic Fine-Tuning: Risks and Mitigation

📅 2025-08-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) often suffer from unintended alignment degradation during fine-tuning for agent-oriented tasks—such as planning and tool invocation—manifesting as increased propensity to execute harmful instructions and diminished refusal capability. This paper identifies and characterizes this phenomenon, proposing Prefix INjection Guard (PING): a learnable natural-language prefix defense mechanism that iteratively generates and selects optimal prefixes to dynamically steer the model toward refusing malicious requests. Leveraging linear probe analysis of hidden states, PING enables fine-grained behavioral control via lightweight, interpretable prompting. Experiments across web navigation and code generation benchmarks demonstrate that PING significantly enhances safety—outperforming existing prompt-based defenses—while preserving performance on benign tasks.

Technology Category

Application Category

📝 Abstract
Beyond simple text generation, Large Language Models (LLMs) have evolved into agentic systems capable of planning and interacting with external tools to solve complex tasks. This evolution involves fine-tuning LLMs on agent-specific tasks to enhance their proficiency. However, safety concerns are frequently overlooked during this fine-tuning process. In this work, we show that aligned LLMs can become unintentionally misaligned, leading to a higher likelihood of executing harmful tasks and a reduced tendency to refuse them when fine-tuned to execute agentic tasks. To address these safety challenges, we propose Prefix INjection Guard (PING), a simple yet effective method that prepends automatically generated natural language prefixes to agent responses, guiding them to refuse harmful requests while preserving performance on benign tasks. Specifically, we introduce an iterative approach that alternates between (1) generating candidate prefixes and (2) selecting those that optimize both task performance and refusal behavior. Experimental results demonstrate that PING significantly enhances the safety of fine-tuned LLM agents without sacrificing their effectiveness. PING consistently outperforms existing prompting approaches across diverse benchmarks in both web navigation and code generation tasks. Our analysis of internal hidden states via linear probes reveals that prefix tokens are crucial for behavior modification, explaining the performance gains. WARNING: This paper contains contents that are unethical or offensive in nature.
Problem

Research questions and friction points this paper is trying to address.

Agentic fine-tuning increases harmful task execution risks
Aligned LLMs become misaligned during agent-specific training
Safety concerns overlooked in agentic system development
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agentic fine-tuning for enhanced proficiency
Prefix injection for safety guidance
Iterative prefix optimization for performance