🤖 AI Summary
Problem: Current LLM safety mechanisms rely predominantly on passive refusal, which is inadequate for non-malicious but vulnerable users, such as individuals seeking help during a psychological crisis; a blunt refusal can trigger behavioral escalation or drive these users to less safe platforms.
Method: We propose Constructive Safety Alignment (CSA), a paradigm shift from refusal-based alignment to user-centered, proactive guidance. CSA integrates game-theoretic modeling of likely user reactions, fine-grained risk-boundary identification, interpretable and controllable reasoning, and instruction tuning.
Contribution/Results: Our model, Oyster-I (Oy1), achieves near-GPT-5 performance on the Constructive Benchmark and approaches GPT-o1's robustness on the Strata-Sword jailbreak benchmark, demonstrating strong general capability alongside high safety. To our knowledge, CSA is the first approach to systematically reconcile the dual objectives of preventing misuse and empowering vulnerable users, advancing trustworthy, human-aligned AI deployment.
📝 Abstract
Large language models (LLMs) typically deploy safety mechanisms to prevent harmful content generation. Most current approaches focus narrowly on risks posed by malicious actors, often framing risks as adversarial events and relying on defensive refusals. However, in real-world settings, risks also come from non-malicious users seeking help while under psychological distress (e.g., self-harm intentions). In such cases, the model's response can strongly influence the user's next actions. Simple refusals may lead them to repeat, escalate, or move to unsafe platforms, creating worse outcomes. We introduce Constructive Safety Alignment (CSA), a human-centric paradigm that protects against malicious misuse while actively guiding vulnerable users toward safe and helpful results. Implemented in Oyster-I (Oy1), CSA combines game-theoretic anticipation of user reactions, fine-grained risk boundary discovery, and interpretable reasoning control, turning safety into a trust-building process. Oy1 achieves state-of-the-art safety among open models while retaining high general capabilities. On our Constructive Benchmark, it shows strong constructive engagement, close to GPT-5; on the Strata-Sword jailbreak dataset, it shows unmatched robustness among open models, nearing GPT-o1 levels. By shifting from refusal-first to guidance-first safety, CSA redefines the model-user relationship, aiming for systems that are not just safe, but meaningfully helpful. We release Oy1, code, and the benchmark to support responsible, user-centered AI.
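To make the game-theoretic anticipation idea concrete, here is a minimal toy sketch (not Oyster-I's actual implementation): a reply is chosen by simulating likely user reactions and maximizing the expected downstream safety-plus-helpfulness score. The reaction simulator, scoring function, and all names (`simulate_reactions`, `outcome_score`, the toy probabilities) are hypothetical illustrations, not from the paper.

```python
# Hypothetical sketch of guidance-first response selection:
# score each candidate reply by the *anticipated* user reaction it induces.
from typing import Callable

Reactions = list[tuple[str, float]]  # (reaction label, predicted probability)

def expected_outcome(reply: str,
                     simulate_reactions: Callable[[str], Reactions],
                     outcome_score: Callable[[str], float]) -> float:
    """Expected downstream score of a reply, averaged over predicted user reactions."""
    return sum(p * outcome_score(r) for r, p in simulate_reactions(reply))

def choose_reply(candidates: list[str],
                 simulate_reactions: Callable[[str], Reactions],
                 outcome_score: Callable[[str], float]) -> str:
    """Pick the candidate whose anticipated consequences score best."""
    return max(candidates,
               key=lambda c: expected_outcome(c, simulate_reactions, outcome_score))

# Toy stand-ins: a flat refusal is predicted to escalate or push the user away,
# while a constructive reply is predicted to de-escalate.
def toy_reactions(reply: str) -> Reactions:
    if "cannot help" in reply:
        return [("escalates", 0.6), ("leaves_platform", 0.3), ("de_escalates", 0.1)]
    return [("de_escalates", 0.7), ("asks_followup", 0.2), ("escalates", 0.1)]

def toy_score(reaction: str) -> float:
    return {"de_escalates": 1.0, "asks_followup": 0.6,
            "leaves_platform": -0.5, "escalates": -1.0}[reaction]

print(choose_reply(["I cannot help with that.",
                    "I hear you; here are safe resources and next steps."],
                   toy_reactions, toy_score))
```

In this toy setup the constructive reply wins because its anticipated reactions score higher than those of the refusal, which is the intuition behind shifting from refusal-first to guidance-first safety.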