STAR-1: Safer Alignment of Reasoning LLMs with 1K Data

📅 2025-04-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large reasoning models (LRMs) struggle to achieve safety alignment under extremely limited data—e.g., only 1K samples. Method: This paper introduces STAR-1, the first high-quality, small-scale safety alignment dataset specifically designed for LRMs. Its core innovation is a tripartite data construction paradigm: (i) multi-source instruction integration, (ii) strategy-guided cautious reasoning generation, and (iii) fine-grained automated filtering via a GPT-4o–based safety scoring system. Contribution/Results: After supervised fine-tuning on STAR-1, LRMs achieve an average 40% improvement in safety performance across four major safety benchmarks, while suffering only a 1.1% average degradation in performance on five key reasoning tasks. This represents the first empirical validation that safety alignment and reasoning capability can be jointly optimized even under extreme data scarcity—significantly outperforming existing approaches.

Technology Category

Application Category

📝 Abstract
This paper introduces STAR-1, a high-quality, just-1k-scale safety dataset specifically designed for large reasoning models (LRMs) like DeepSeek-R1. Built on three core principles -- diversity, deliberative reasoning, and rigorous filtering -- STAR-1 aims to address the critical needs for safety alignment in LRMs. Specifically, we begin by integrating existing open-source safety datasets from diverse sources. Then, we curate safety policies to generate policy-grounded deliberative reasoning samples. Lastly, we apply a GPT-4o-based safety scoring system to select training examples aligned with best practices. Experimental results show that fine-tuning LRMs with STAR-1 leads to an average 40% improvement in safety performance across four benchmarks, while only incurring a marginal decrease (e.g., an average of 1.1%) in reasoning ability measured across five reasoning tasks. Extensive ablation studies further validate the importance of our design principles in constructing STAR-1 and analyze its efficacy across both LRMs and traditional LLMs. Our project page is https://ucsc-vlaa.github.io/STAR-1.
Problem

Research questions and friction points this paper is trying to address.

Enhancing safety alignment in reasoning LLMs with minimal data
Improving safety performance without compromising reasoning ability
Validating dataset design principles for effective safety alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diverse open-source safety datasets integration
Policy-grounded deliberative reasoning samples curation
GPT-4o-based safety scoring system application
🔎 Similar Papers
No similar papers found.