🤖 AI Summary
Large language models (LLMs) frequently hallucinate, generating factually incorrect or unsupported outputs, in part because they lack awareness of their own knowledge boundaries. To address this, we propose a lightweight, interpretability-guided hallucination suppression method based on activation steering. Our approach uses contrastive learning and amortized optimization to fine-tune only a small submodule within a single Transformer layer, embedding an explicit "refusal capability" directly into the model's weights. The method is architecture-agnostic: to our knowledge it is the first steering-based training method shown to be effective for both dense and Mixture-of-Experts (MoE) models, and it mitigates hallucination in both text-only and vision-language settings. Experiments show a 30–40% reduction in hallucination rates on multiple short-answer benchmarks, roughly 30× lower compute cost and 20× less training data than strong LoRA-based baselines (SFT and DPO), and strong out-of-distribution generalization.
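The summary mentions a contrastive procedure for locating a "refusal" direction in activation space. The sketch below, using standard Hugging Face / PyTorch calls, illustrates one common way such a direction can be estimated: take the difference of mean hidden states between prompts the model answers correctly and prompts it does not know. The helper name, layer choice, and last-token pooling are assumptions for illustration, not the paper's exact recipe.

```python
import torch

def contrastive_steering_vector(model, tokenizer, known_prompts, unknown_prompts, layer_idx):
    """Estimate a unit-norm 'refusal' direction as the difference between mean
    hidden states on known vs. unknown questions (hypothetical helper)."""
    def mean_hidden(prompts):
        states = []
        for p in prompts:
            ids = tokenizer(p, return_tensors="pt").to(model.device)
            with torch.no_grad():
                out = model(**ids, output_hidden_states=True)
            # pool the last-token hidden state at the chosen layer
            states.append(out.hidden_states[layer_idx][0, -1])
        return torch.stack(states).mean(dim=0)

    direction = mean_hidden(unknown_prompts) - mean_hidden(known_prompts)
    return direction / direction.norm()
```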
📝 Abstract
Large Language Models (LLMs) exhibit impressive capabilities but often hallucinate, confidently providing incorrect answers instead of admitting ignorance. Prior work has shown that models encode linear representations of their own knowledge and that activation steering can reduce hallucinations. These approaches, however, require real-time monitoring and intervention during inference. We introduce Contrastive Activation Steering for Amortized Learning (CASAL), an efficient algorithm that connects interpretability with amortized optimization. CASAL directly bakes the benefits of activation steering into the model's weights. Once trained, LLMs answer questions they know while abstaining from answering those they do not. CASAL's lightweight design requires training only a submodule of a single transformer layer and yet reduces hallucination by 30%-40% across multiple short-form QA benchmarks. CASAL is 30x more compute-efficient and 20x more data-efficient than strong LoRA-based baselines such as SFT and DPO, boosting its practical applicability in data-scarce domains. Importantly, CASAL also generalizes effectively to out-of-distribution (OOD) domains. We showcase CASAL's flexibility in mitigating hallucinations in both text-only and vision-language models. To our knowledge, CASAL is the first steering-based training method that has been shown to be effective for both dense and Mixture-of-Experts (MoE) models. CASAL represents a promising step forward for applying interpretability-inspired methods in practical production systems.
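The abstract's central idea, "baking" activation steering into the weights of a single-layer submodule, can be pictured with a minimal amortization loop: freeze the rest of the model, and train only one layer's MLP so that its outputs match the unsteered activations on known questions and the steered (direction-shifted) activations on unknown questions. This is a sketch under assumed loss design, steering strength, and training hyperparameters; it is not the authors' exact CASAL objective.

```python
import torch
import torch.nn.functional as F

def amortize_steering(mlp, h_known, h_unknown, direction, alpha=8.0, steps=200, lr=1e-4):
    """Fine-tune only one transformer layer's MLP so that activations for unknown
    questions are shifted along the refusal direction, while known-question
    behavior is preserved (illustrative sketch; alpha and loss are assumptions)."""
    with torch.no_grad():
        target_known = mlp(h_known)                           # preserve behavior on known inputs
        target_unknown = mlp(h_unknown) + alpha * direction   # steered targets for unknown inputs

    opt = torch.optim.Adam(mlp.parameters(), lr=lr)           # every other weight stays frozen
    for _ in range(steps):
        loss = (F.mse_loss(mlp(h_known), target_known)
                + F.mse_loss(mlp(h_unknown), target_unknown))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return mlp
```

Because only this submodule is updated and no steering hook runs at inference time, the trained model keeps standard deployment while retaining the steering effect, which is consistent with the compute- and data-efficiency claims above.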