Shape it Up! Restoring LLM Safety during Finetuning

📅 2025-05-22
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Fine-tuning large language models (LLMs) is highly vulnerable to safety alignment collapse: even a few adversarial examples can severely degrade safety. Existing static safety-shaping methods apply coarse-grained, sample-level weighting based solely on overall response safety, ignoring fine-grained, token-level safety dynamics within responses. To address this, we propose Dynamic Safety Shaping (DSS), the first framework enabling token-level, fine-grained safety regulation. DSS introduces Safety Trajectory Assessment of Response (STAR), which repurposes a guardian model to perform segment-wise safety evaluation along the generated response trajectory; it then applies differential weighting to safe versus harmful segments during parameter updates. Evaluated across diverse threat types, multiple benchmark datasets, and distinct LLM families, DSS significantly improves safety robustness without compromising task performance. This work establishes a scalable, interpretable paradigm for safety-aware LLM fine-tuning.

๐Ÿ“ Abstract
Finetuning large language models (LLMs) enables user-specific customization but introduces critical safety risks: even a few harmful examples can compromise safety alignment. A common mitigation strategy is to update the model more strongly on examples deemed safe, while downweighting or excluding those flagged as unsafe. However, because safety context can shift within a single example, updating the model equally on both harmful and harmless parts of a response is suboptimal-a coarse treatment we term static safety shaping. In contrast, we propose dynamic safety shaping (DSS), a framework that uses fine-grained safety signals to reinforce learning from safe segments of a response while suppressing unsafe content. To enable such fine-grained control during finetuning, we introduce a key insight: guardrail models, traditionally used for filtering, can be repurposed to evaluate partial responses, tracking how safety risk evolves throughout the response, segment by segment. This leads to the Safety Trajectory Assessment of Response (STAR), a token-level signal that enables shaping to operate dynamically over the training sequence. Building on this, we present STAR-DSS, guided by STAR scores, that robustly mitigates finetuning risks and delivers substantial safety improvements across diverse threats, datasets, and model families-all without compromising capability on intended tasks. We encourage future safety research to build on dynamic shaping principles for stronger mitigation against evolving finetuning risks.
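The mechanism the abstract describes, scoring growing prefixes of a response with a guardrail model and then weighting each token's training loss by the safety of its segment, can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the `guard` callable, segment length, and the specific weights (`safe_gain`, `unsafe_gain`) are all assumptions standing in for a real guardrail model and the paper's actual shaping function.

```python
def star_scores(tokens, guard, segment_len=2):
    """STAR-style trajectory assessment (illustrative): score each growing
    prefix of the response with a guardrail model, and assign that segment's
    safety score to every token in the segment."""
    scores = []
    for end in range(segment_len, len(tokens) + 1, segment_len):
        s = guard(tokens[:end])           # safety of the partial response so far
        scores.extend([s] * segment_len)  # broadcast score to the segment's tokens
    return scores[: len(tokens)]


def dss_weighted_loss(token_losses, scores, safe_gain=1.0, unsafe_gain=-0.5):
    """Dynamic safety shaping (illustrative): reinforce tokens in safe
    segments and suppress tokens in unsafe ones. A negative weight turns the
    update into unlearning for the unsafe span; the thresholds and gains here
    are hypothetical."""
    total = 0.0
    for loss, s in zip(token_losses, scores):
        weight = safe_gain if s >= 0.5 else unsafe_gain
        total += weight * loss
    return total / len(token_losses)
```

For example, with a toy guard that flags any prefix containing a disallowed word, a four-token response whose second half turns unsafe gets scores like `[1.0, 1.0, 0.0, 0.0]`, so the last two tokens contribute with negative weight instead of being learned. In a real finetuning loop, `token_losses` would be the per-token cross-entropy values (e.g. from a loss computed with no reduction) rather than the placeholder list used here.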
Problem

Research questions and friction points this paper is trying to address.

Mitigating safety risks during LLM finetuning with harmful examples
Addressing coarse static safety shaping in model updates
Enabling fine-grained dynamic safety control via token-level signals
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic safety shaping for fine-grained control
Guardrail models repurposed for partial response evaluation
Token-level STAR signal enables dynamic training shaping
🔎 Similar Papers
No similar papers found.