Revisiting Robustness for LLM Safety Alignment via Selective Geometry Control

📅 2026-02-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of large language models to safety alignment failures under distributional shift and noisy preference supervision, a challenge exacerbated by existing approaches' neglect of optimization geometry. The authors propose ShaPO, a framework that brings an optimization-geometric perspective to alignment robustness research. By applying hierarchical (token-level and reward-level) selective geometric control within critical parameter subspaces, ShaPO preserves worst-case alignment guarantees without imposing excessive global regularization. The method integrates seamlessly with data-centric robustness strategies and consistently outperforms state-of-the-art preference optimization techniques across diverse safety benchmarks and noisy preference settings, with further gains when combined with complementary robust training methods.

📝 Abstract
Safety alignment of large language models remains brittle under domain shift and noisy preference supervision. Most existing robust alignment methods focus on uncertainty in alignment data, while overlooking optimization-induced fragility in preference-based objectives. In this work, we revisit robustness for LLM safety alignment from an optimization geometry perspective, and argue that robustness failures cannot be addressed by data-centric methods alone. We propose ShaPO, a geometry-aware preference optimization framework that enforces worst-case alignment objectives via selective geometry control over an alignment-critical parameter subspace. By avoiding uniform geometry constraints, ShaPO mitigates the over-regularization that can harm robustness under distribution shift. We instantiate ShaPO at two levels: token-level ShaPO stabilizes likelihood-based surrogate optimization, while reward-level ShaPO enforces reward-consistent optimization under noisy supervision. Across diverse safety benchmarks and noisy preference settings, ShaPO consistently improves safety robustness over popular preference optimization methods. Moreover, ShaPO composes cleanly with data-robust objectives, yielding additional gains and empirically supporting the proposed optimization-geometry perspective.
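The paper itself does not include code, and ShaPO's exact formulation is not given here. As a rough illustration only, the sketch below shows what "worst-case optimization via selective geometry control over a parameter subspace" could look like in the simplest possible setting: a linear reward model, a DPO-style logistic preference loss, and a SAM-style (sharpness-aware) worst-case perturbation restricted by a binary mask standing in for the "alignment-critical subspace." Every function name, the mask, and the loss choice are assumptions for illustration, not the authors' method.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dpo_like_loss(theta, x_chosen, x_rejected, beta=1.0):
    # Toy linear "reward": r(x) = theta @ x. A DPO-style objective
    # maximizes log sigma(beta * (r_chosen - r_rejected)).
    margin = beta * (x_chosen @ theta - x_rejected @ theta)
    return -np.mean(np.log(sigmoid(margin) + 1e-12))

def dpo_like_grad(theta, x_chosen, x_rejected, beta=1.0):
    # Closed-form gradient of the loss above w.r.t. theta.
    margin = beta * (x_chosen @ theta - x_rejected @ theta)
    w = -beta * (1.0 - sigmoid(margin))            # dL/dmargin per example
    return (w[:, None] * (x_chosen - x_rejected)).mean(axis=0)

def selective_sam_step(theta, x_c, x_r, mask, rho=0.05, lr=0.1):
    # SAM-style worst-case step, but the adversarial perturbation is
    # confined to the masked ("alignment-critical") coordinates instead
    # of being applied uniformly to all parameters.
    g = dpo_like_grad(theta, x_c, x_r)
    g_sel = g * mask                               # restrict to subspace
    theta_adv = theta + rho * g_sel / (np.linalg.norm(g_sel) + 1e-12)
    g_adv = dpo_like_grad(theta_adv, x_c, x_r)     # gradient at worst case
    return theta - lr * g_adv
```

In this sketch, the only difference from plain sharpness-aware minimization is the `mask` multiply: the worst-case ascent direction is projected onto a chosen coordinate subspace before perturbing, so flatness is enforced selectively rather than globally, which is the intuition the abstract attributes to avoiding "uniform geometry constraints."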
Problem

Research questions and friction points this paper is trying to address.

LLM safety alignment
robustness
domain shift
noisy preference supervision
optimization-induced fragility
Innovation

Methods, ideas, or system contributions that make the work stand out.

optimization geometry
robust alignment
preference optimization
selective geometry control
LLM safety