STAR-S: Improving Safety Alignment through Self-Taught Reasoning on Safety Rules

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of large language models to jailbreak attacks by proposing STAR-S, a novel framework that introduces self-taught reasoning for safety alignment. STAR-S leverages predefined safety rules to guide the model in generating compliant reasoning trajectories and incorporates a self-reflection mechanism to filter for high-quality synthetic data, establishing an iterative fine-tuning loop that requires no human annotation. Evaluated across multiple jailbreak benchmarks, STAR-S substantially outperforms existing approaches, demonstrating stronger safety reasoning and better alignment with safety objectives.
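
As a concrete illustration of the rule-guided elicitation step, the sketch below shows one way a prompt could embed predefined safety rules so the model reasons over them before answering. The rule texts, the function name `build_rule_guided_prompt`, and the template wording are all hypothetical illustrations, not taken from the paper.

```python
# Hypothetical helper illustrating rule-guided elicitation; the rules and
# template wording are assumptions, not the authors' actual prompts.
SAFETY_RULES = [
    "Refuse requests for instructions that enable physical harm.",
    "Do not reveal private or personally identifying information.",
]

def build_rule_guided_prompt(user_query: str) -> str:
    """Embed the safety rules so the model reasons over them before answering."""
    rules = "\n".join(f"{i}. {rule}" for i, rule in enumerate(SAFETY_RULES, 1))
    return (
        "Before answering, reason step by step about whether the request "
        f"violates any of these safety rules:\n{rules}\n\n"
        f"User request: {user_query}\nReasoning:"
    )
```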

📝 Abstract
Defending against jailbreak attacks is crucial for the safe deployment of Large Language Models (LLMs). Recent research has attempted to improve safety by training models to reason over safety rules before responding. However, a key issue is determining what form of safety reasoning effectively defends against jailbreak attacks; such reasoning is difficult to design explicitly or obtain directly. To address this, we propose STAR-S (Self-TAught Reasoning based on Safety rules), a framework that integrates the learning of safety-rule reasoning into a self-taught loop. The core of STAR-S is to elicit reasoning and reflection guided by safety rules, then leverage fine-tuning to enhance safety reasoning. Repeating this process creates a synergistic cycle: improvements in the model's reasoning over and interpretation of safety rules allow it to produce better reasoning data under safety-rule prompts, which is then used for further training. Experiments show that STAR-S effectively defends against jailbreak attacks, outperforming baselines. Code is available at: https://github.com/pikepokenew/STAR_S.git.
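
To make the self-taught loop concrete, here is a minimal sketch of the cycle the abstract describes: elicit rule-guided reasoning, self-reflect to filter the synthetic data, fine-tune, and repeat. The callables `generate_with_rules`, `self_reflect`, and `finetune` are assumed stand-ins for the paper's components, not its actual API.

```python
from typing import Callable, List, Tuple

def star_s_loop(
    model,
    prompts: List[str],
    safety_rules: List[str],
    generate_with_rules: Callable,  # (model, prompt, rules) -> reasoning trajectory
    self_reflect: Callable,         # (model, trajectory, rules) -> bool (compliant?)
    finetune: Callable,             # (model, dataset) -> updated model
    num_iterations: int = 3,
):
    """Sketch of iteratively bootstrapping safety reasoning without human annotation."""
    for _ in range(num_iterations):
        dataset: List[Tuple[str, str]] = []
        for prompt in prompts:
            # 1. Elicit a reasoning trajectory guided by the safety rules.
            trajectory = generate_with_rules(model, prompt, safety_rules)
            # 2. Self-reflection: keep only trajectories the model itself
            #    judges compliant with the rules (synthetic-data filtering).
            if self_reflect(model, trajectory, safety_rules):
                dataset.append((prompt, trajectory))
        # 3. Fine-tune on the filtered data; the improved model produces
        #    better rule-guided reasoning in the next round.
        model = finetune(model, dataset)
    return model
```

Passing the components as callables keeps the sketch self-contained; the key design point is that each round's fine-tuned model becomes the next round's generator, which is what drives the synergistic cycle the abstract describes.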
Problem

Research questions and friction points this paper is trying to address.

jailbreak attacks
safety alignment
Large Language Models
safety rules
reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

self-taught reasoning
safety alignment
jailbreak defense
safety rules
iterative fine-tuning