🤖 AI Summary
Current large language models (LLMs) face affordance-based safety risks: their reasoning often overlooks logical entailments, so outputs can inadvertently facilitate harmful actions. Conventional safety interventions, such as scalar outcome-based reward modeling, parameter fine-tuning, or heuristic decoding, lack both fine-grained observability and forward-looking control, so they cannot intervene at the critical reasoning steps where harm is introduced. To address this, the paper proposes AURA, a multi-level process alignment framework that introduces: (1) process reward models (PRMs) that evaluate reasoning paths step by step along logical-coherence and safety dimensions; (2) an introspective self-critique mechanism that checks logical consistency at intermediate reasoning steps; and (3) an adaptive decoding strategy guided by these fine-grained safety assessments. Experiments show substantial improvements in logical integrity and affordance-sensitive safety on complex reasoning tasks, outperforming state-of-the-art baselines on multiple alignment-sensitive benchmarks.
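To illustrate the step-level evaluation described above, the sketch below shows one way a process reward model could score each intermediate reasoning step and flag low-safety steps as candidate intervention points. This is a minimal sketch under assumed interfaces: `StepScore`, `score_reasoning_path`, and `flag_unsafe_steps` are hypothetical names, and the paper's actual PRM architecture and scoring interface are not specified here.

```python
# Illustrative sketch of step-level process reward scoring (not the paper's code).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class StepScore:
    step_text: str
    logic_score: float    # stepwise logical-coherence estimate in [0, 1] (assumed scale)
    safety_score: float   # stepwise affordance-safety estimate in [0, 1] (assumed scale)

def score_reasoning_path(
    steps: List[str],
    prm: Callable[[str, List[str]], StepScore],
) -> List[StepScore]:
    """Score each intermediate step given the prefix of prior steps,
    rather than assigning one outcome-level reward to the final answer."""
    scores = []
    for i, step in enumerate(steps):
        scores.append(prm(step, steps[:i]))
    return scores

def flag_unsafe_steps(scores: List[StepScore], threshold: float = 0.5) -> List[int]:
    """Return indices of steps whose safety estimate falls below the threshold,
    i.e., points where an intervention could occur before decoding continues."""
    return [i for i, s in enumerate(scores) if s.safety_score < threshold]
```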
📝 Abstract
Present-day LLMs face the challenge of managing affordance-based safety risks: situations where outputs inadvertently facilitate harmful actions because logical implications are overlooked. Traditional safety solutions, such as scalar outcome-based reward models, parameter tuning, or heuristic decoding strategies, lack the granularity and proactive control needed to reliably detect and intervene at subtle yet crucial reasoning steps. To address this gap, we introduce AURA, a multi-layered framework centered on Process Reward Models (PRMs), which provide step-level evaluations of logical coherence and safety awareness. The framework combines introspective self-critique, fine-grained PRM assessments, and adaptive safety-aware decoding to dynamically and proactively guide models toward safer reasoning trajectories. Empirical results show that this approach surpasses existing methods, substantially improving the logical integrity and affordance-sensitive safety of model outputs. This work is a step toward safer, more responsible, and contextually aware AI for alignment-sensitive applications.
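To make the decoding-time interaction of these components concrete, the sketch below combines hypothetical PRM step scores with a self-critique filter to choose among candidate continuations. The weighting rule, the `self_critique` hook, and all function names are assumptions made for illustration; they are not the authors' implementation.

```python
# Illustrative sketch of safety-aware, step-by-step candidate selection (assumed design).
from typing import Callable, List, Tuple

def select_next_step(
    context: str,
    candidates: List[str],
    prm_score: Callable[[str, str], Tuple[float, float]],  # (context, candidate) -> (logic, safety)
    self_critique: Callable[[str, str], bool],              # (context, candidate) -> passes consistency check
    safety_weight: float = 0.6,                             # assumed weighting, not from the paper
) -> str:
    """Pick the candidate continuation that maximizes a weighted logic/safety score,
    after filtering out candidates that fail an introspective consistency check."""
    viable = [c for c in candidates if self_critique(context, c)]
    pool = viable if viable else candidates  # fall back if every candidate is rejected

    def combined(c: str) -> float:
        logic, safety = prm_score(context, c)
        return (1.0 - safety_weight) * logic + safety_weight * safety

    return max(pool, key=combined)
```

In this reading, the self-critique acts as a hard filter on logically inconsistent steps, while the PRM scores softly rerank the remaining candidates; other combination rules (e.g., thresholding on safety before reranking) would be equally consistent with the abstract.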