Learning from Failures: Understanding LLM Alignment through Failure-Aware Inverse RL

📅 2025-10-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing inverse reinforcement learning (IRL) methods for recovering implicit reward signals of large language models (LLMs) from RLHF data overlook highly informative failure cases—such as misclassified or marginally separated preference pairs—leading to ambiguous, poorly interpretable, and unsafe reward reconstructions. This work proposes a failure-aware IRL framework that, for the first time, systematically leverages failure samples identified by the reward model to guide reward function learning. It integrates adaptive preference modeling with an unsupervised optimization mechanism to substantially reduce inverse ambiguity. Experiments demonstrate that our method outperforms state-of-the-art IRL approaches across multiple alignment evaluation metrics, achieves superior detoxification performance, and enables more efficient re-alignment training—without requiring additional annotations or auxiliary classifiers.

📝 Abstract
Reinforcement Learning from Human Feedback (RLHF) aligns Large Language Models (LLMs) with human preferences, yet the underlying reward signals they internalize remain hidden, posing a critical challenge for interpretability and safety. Existing approaches attempt to extract these latent incentives using Inverse Reinforcement Learning (IRL), but treat all preference pairs equally, often overlooking the most informative signals: those examples the extracted reward model misclassifies or assigns nearly equal scores, which we term *failures*. We introduce a novel *failure-aware* IRL algorithm that focuses on misclassified or difficult examples to recover the latent rewards defining model behaviors. By learning from these failures, our failure-aware IRL extracts reward functions that better reflect the true objectives behind RLHF. We demonstrate that failure-aware IRL outperforms existing IRL baselines across multiple metrics when applied to LLM detoxification, without requiring external classifiers or supervision. Crucially, failure-aware IRL yields rewards that better capture the true incentives learned during RLHF, enabling more effective re-RLHF training than standard IRL. This establishes failure-aware IRL as a robust, scalable method for auditing model alignment and reducing ambiguity in the IRL process.
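The abstract's notion of a *failure* (a preference pair the extracted reward model misclassifies, or one it separates by a near-zero margin) can be sketched as a simple filter over reward scores. This is a minimal illustrative sketch, not the paper's algorithm; the names `reward_fn` and `margin_eps` are hypothetical.

```python
def find_failures(pairs, reward_fn, margin_eps=0.1):
    """Flag preference pairs the reward model gets wrong or barely separates.

    pairs: iterable of (chosen, rejected) responses, where `chosen` is the
           human-preferred one. reward_fn maps a response to a scalar score.
    Returns a list of ((chosen, rejected), reason) tuples.
    """
    failures = []
    for chosen, rejected in pairs:
        margin = reward_fn(chosen) - reward_fn(rejected)
        if margin < 0:
            # Misclassified: the rejected response scored higher.
            failures.append(((chosen, rejected), "misclassified"))
        elif margin < margin_eps:
            # Marginally separated: scores are nearly equal.
            failures.append(((chosen, rejected), "near-tie"))
    return failures
```

Both failure types carry more information about where the recovered reward function disagrees with the RLHF data than confidently ordered pairs do.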
Problem

Research questions and friction points this paper is trying to address.

Extracting latent reward signals from RLHF-aligned LLMs
Focusing on misclassified examples to recover true objectives
Improving interpretability and safety through failure-aware IRL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Failure-aware IRL focuses on misclassified examples
Extracts reward functions reflecting true RLHF objectives
Enables effective model auditing without external supervision
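One way to read "focuses on misclassified examples" is as a reweighted Bradley-Terry-style objective in which failure pairs contribute more to the loss. The sketch below is an assumption-laden illustration of that idea; the fixed weight `w_fail` stands in for the paper's adaptive preference modeling, which is not reproduced here.

```python
import math

def weighted_bt_loss(margins, w_fail=2.0, margin_eps=0.1):
    """Weighted Bradley-Terry negative log-likelihood over preference pairs.

    margins[i] = r(chosen_i) - r(rejected_i) under the current reward model.
    Failure pairs (negative or near-zero margin) get weight w_fail;
    confidently ordered pairs get weight 1. Returns the weighted average.
    """
    total, weight_sum = 0.0, 0.0
    for m in margins:
        w = w_fail if m < margin_eps else 1.0
        # -log(sigmoid(m)) = log(1 + exp(-m)): the per-pair BT loss.
        total += w * math.log(1.0 + math.exp(-m))
        weight_sum += w
    return total / weight_sum
```

Under this toy weighting, a reward model that misranks or barely separates pairs is penalized more heavily, pushing the recovered reward toward the regions where it currently disagrees with the preference data.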