Adaptive Activation Steering: A Tuning-Free LLM Truthfulness Improvement Method for Diverse Hallucinations Categories

📅 2024-05-26
🏛️ arXiv.org
📈 Citations: 5 · Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit a latent awareness of truthfulness yet frequently generate false statements, spanning factual, logical, and existential hallucinations. The paper proposes Adaptive Activation Steering (ACT), a tuning-free, inference-time method that treats truthfulness as a linearly encoded concept in LLM activation space. ACT derives multiple truthfulness-related steering vectors and adaptively adjusts the steering intensity, shifting activations toward the "truthful" direction during decoding so that diverse hallucination categories are suppressed by a single mechanism. Applied as a plug-and-play add-on, ACT improves truthfulness in LLaMA (142%), LLaMA2 (24%), Alpaca (36%), Vicuna (28%), LLaMA2-Chat (19%), and LLaMA3 (34%), and it scales to 13B, 33B, and 65B models without modifying any parameters.

📝 Abstract
Recent studies have indicated that Large Language Models (LLMs) harbor an inherent understanding of truthfulness, yet often fail to consistently express it and generate false statements. This gap between "knowing" and "telling" poses a challenge for ensuring the truthfulness of generated content. Inspired by recent work on encoding human-interpretable concepts linearly within large language models, we treat truthfulness as a specially linearly encoded concept within LLMs, and introduce Adaptive Activation Steering (ACT), a tuning-free method that adaptively shifts LLM activations in the "truthful" direction during inference. ACT addresses diverse categories of hallucinations by utilizing diverse truthfulness-related steering vectors and adjusting the steering intensity adaptively. Applied as an add-on across various models, ACT significantly improves truthfulness in LLaMA (↑ 142%), LLaMA2 (↑ 24%), Alpaca (↑ 36%), Vicuna (↑ 28%), LLaMA2-Chat (↑ 19%), and LLaMA3 (↑ 34%). Furthermore, we verify ACT's scalability across larger models (13B, 33B, 65B), underscoring the adaptability of ACT to large-scale language models. Our code is available at https://github.com/tianlwang/ACT.
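The abstract describes shifting activations along a "truthful" direction during inference, with adaptively adjusted intensity. A minimal PyTorch sketch of that idea follows; the steering vector `v`, the sigmoid-based strength schedule, and the choice of hooked layer are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

def make_steering_hook(v: torch.Tensor, alpha: float = 1.0):
    """Forward hook that shifts a layer's output along direction v.
    The shift is scaled adaptively: activations that already align
    with v (large projection) are steered less."""
    v = v / v.norm()  # unit-length steering direction

    def hook(module, inputs, output):
        h = output[0] if isinstance(output, tuple) else output
        proj = (h @ v).unsqueeze(-1)            # per-token projection onto v
        strength = alpha * torch.sigmoid(-proj) # steer more when alignment is low
        h_steered = h + strength * v
        if isinstance(output, tuple):
            return (h_steered,) + output[1:]
        return h_steered

    return hook

# Toy usage on a single linear layer; with a real LLM one would register
# the hook on a chosen transformer block instead.
layer = nn.Linear(16, 16)
v = torch.randn(16)  # hypothetical truthfulness direction
handle = layer.register_forward_hook(make_steering_hook(v, alpha=2.0))
out = layer(torch.randn(2, 5, 16))  # (batch, seq, hidden) activations, steered
handle.remove()
```

Because the hook only edits activations on the forward pass, no model parameters change, which is what makes this kind of method a plug-and-play add-on.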
Problem

Research questions and friction points this paper is trying to address.

Improves LLM truthfulness without tuning
Addresses diverse hallucination categories
Enhances truthfulness across various LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tuning-free Adaptive Activation Steering
Utilizes diverse truthfulness-related steering vectors (see the sketch after this list)
Scalable across large language models
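A common way to obtain such steering vectors, used in related activation-probing work, is a difference-in-means probe over layer activations collected on truthful versus false statements; the paper's exact multi-vector construction may differ, and the tensors below are placeholders.

```python
import torch

def truthfulness_direction(acts_true: torch.Tensor,
                           acts_false: torch.Tensor) -> torch.Tensor:
    """Difference-in-means probe: one steering vector from activation
    sets of shape (n, d) collected on truthful vs. false statements."""
    v = acts_true.mean(dim=0) - acts_false.mean(dim=0)
    return v / v.norm()

# One vector per probed layer; a multi-vector scheme would keep several
# such directions (e.g. from clustered statement pairs) per layer.
acts_true = torch.randn(128, 16)   # placeholder activations on true statements
acts_false = torch.randn(128, 16)  # placeholder activations on false statements
v = truthfulness_direction(acts_true, acts_false)
```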
👥 Authors
Tianlong Wang · Peking University (LLM reasoning, representation editing)
Xianfeng Jiao · Peking University, Beijing, China
Yifan He · Peking University, Beijing, China
Zhongzhi Chen · Beihang University, Beijing, China
Yinghao Zhu · The University of Hong Kong (data mining, AI for healthcare)
Xu Chu · Peking University, Beijing, China
Junyi Gao · University of Edinburgh (data mining, AI for healthcare)
Yasha Wang · Peking University, Beijing, China
Liantao Ma · Peking University, Beijing, China