AI Summary
This work addresses the vulnerability of large language models to adversarial attacks that elicit unsafe content in high-stakes scenarios, a challenge that existing safety mechanisms mitigate inadequately, as they lack both theoretical guarantees and practical efficacy. To bridge this gap, the paper introduces control barrier functions (CBFs) into the model's latent representation space, enabling the learning of non-linear safety constraints. During inference, this approach dynamically detects and blocks unsafe response trajectories without modifying model parameters, and merges diverse safety constraints efficiently. Experimental results across multiple models and datasets demonstrate that the proposed method significantly reduces both adversarial attack success rates and the proportion of unsafe generations, outperforming current state-of-the-art approaches while preserving the original model's capabilities.
Abstract
Despite the state-of-the-art performance of large language models (LLMs) across diverse tasks, their susceptibility to adversarial attacks and unsafe content generation remains a major obstacle to deployment, particularly in high-stakes settings. Addressing this challenge requires safety mechanisms that are both practically effective and supported by rigorous theory. We introduce BarrierSteer, a novel framework that formalizes response safety by embedding learned non-linear safety constraints directly into the model's latent representation space. BarrierSteer employs a steering mechanism based on Control Barrier Functions (CBFs) to detect and prevent unsafe response trajectories with high precision during inference. By enforcing multiple safety constraints through efficient constraint merging, without modifying the underlying LLM parameters, BarrierSteer preserves the model's original capabilities and performance. We provide theoretical results establishing that applying CBFs in latent space offers a principled and computationally efficient approach to enforcing safety. Our experiments across multiple models and datasets show that BarrierSteer substantially reduces adversarial success rates, decreases unsafe generations, and outperforms existing methods.
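To make the CBF-based steering idea concrete, the following is a minimal, hypothetical sketch (not the paper's implementation): it assumes a learned barrier function b(h) over latent states h, with b(h) ≥ 0 indicating safety, and enforces the standard discrete-time CBF condition b(h_next) ≥ (1 − α)·b(h_prev) by applying a minimum-norm correction to the next latent state. A linear barrier is used purely for brevity; the paper learns non-linear constraints.

```python
import numpy as np

# Illustrative assumption: a learned *linear* barrier b(h) = w.h + c,
# where b(h) >= 0 marks the latent state h as safe. All names and
# values here are hypothetical, not the paper's actual components.
ALPHA = 0.5  # class-K decay rate in the discrete CBF condition

def barrier(h, w, c):
    """Barrier value; non-negative means the latent state is safe."""
    return float(w @ h + c)

def steer(h_prev, h_next, w, c):
    """Minimally shift h_next so that the discrete CBF condition
    b(h_next) >= (1 - ALPHA) * b(h_prev) holds, leaving the model
    untouched when the condition is already satisfied."""
    target = (1.0 - ALPHA) * barrier(h_prev, w, c)
    gap = target - barrier(h_next, w, c)
    if gap <= 0.0:
        return h_next  # trajectory already safe: no intervention
    # Closed-form minimum-norm correction along the barrier gradient w:
    # adding gap * w / ||w||^2 raises b(h_next) by exactly `gap`.
    return h_next + gap * w / (w @ w)

rng = np.random.default_rng(0)
w = rng.normal(size=8)
c = 0.1
h_prev = rng.normal(size=8)
h_next = rng.normal(size=8)
h_safe = steer(h_prev, h_next, w, c)
```

Because the barrier here is linear, the correction has a closed form; for a learned non-linear barrier the analogous step would typically be a small constrained optimization (e.g. a quadratic program) per decoding step.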