ContextFocus: Activation Steering for Contextual Faithfulness in Large Language Models

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the tendency of large language models to rely on internal memorized knowledge over conflicting but relevant external retrieval context, often resulting in unfaithful outputs. To mitigate this issue, the authors propose a lightweight activation steering method that intervenes in the activation signals of key transformer layers. Without requiring fine-tuning or incurring significant inference overhead, this approach substantially enhances contextual faithfulness while preserving generation fluency. The method is compatible with existing prompting strategies and scales effectively to large models. Evaluated on the ConFiQA benchmark, it significantly outperforms strong baselines such as ContextDPO and COIECD, demonstrating its effectiveness, robustness, and computational efficiency.
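The summary describes activation steering at a high level: add a steering direction to the hidden states of selected transformer layers at inference time, with no weight updates. As a rough illustration only, the sketch below shows the generic recipe (a steering vector built from the difference of mean activations between context-faithful and memory-reliant examples, then added to a hidden state with a scaling factor `alpha`); all data, names, and the choice of `alpha` here are hypothetical and not taken from the paper.

```python
import numpy as np

def steer_activations(hidden, steering_vec, alpha=2.0):
    """Add a scaled steering direction to a layer's hidden states.

    Hypothetical sketch of the generic activation-steering recipe;
    not the paper's exact intervention.
    """
    return hidden + alpha * steering_vec

# Toy "context-faithful" direction: difference of mean activations between
# examples where a model follows the context vs. relies on memory.
# All activations here are random stand-ins, not real model states.
rng = np.random.default_rng(0)
faithful_acts = rng.normal(0.5, 1.0, size=(8, 4))   # follows retrieved context
memory_acts = rng.normal(-0.5, 1.0, size=(8, 4))    # defaults to memorized facts
direction = faithful_acts.mean(axis=0) - memory_acts.mean(axis=0)
direction /= np.linalg.norm(direction)              # unit-norm steering vector

hidden = rng.normal(size=(1, 4))                    # one token's hidden state
steered = steer_activations(hidden, direction, alpha=2.0)
```

In a real setup the intervention would typically be attached to chosen layers (e.g. via a forward hook in PyTorch) so every forward pass is shifted along the steering direction, which is what keeps the method fine-tuning-free and cheap at inference.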

📝 Abstract
Large Language Models (LLMs) encode vast amounts of parametric knowledge during pre-training. As world knowledge evolves, effective deployment increasingly depends on their ability to faithfully follow externally retrieved context. When such evidence conflicts with the model's internal knowledge, LLMs often default to memorized facts, producing unfaithful outputs. In this work, we introduce ContextFocus, a lightweight activation steering approach that improves context faithfulness in such knowledge-conflict settings while preserving fluency and efficiency. Unlike prior approaches, our solution requires no model fine-tuning and incurs minimal inference-time overhead, making it highly efficient. We evaluate ContextFocus on the ConFiQA benchmark, comparing it against strong baselines including ContextDPO, COIECD, and prompting-based methods. Furthermore, we show that our method is complementary to prompting strategies and remains effective on larger models. Extensive experiments show that ContextFocus significantly improves contextual faithfulness. Our results highlight the effectiveness, robustness, and efficiency of ContextFocus in improving the contextual faithfulness of LLM outputs.
Problem

Research questions and friction points this paper is trying to address.

contextual faithfulness
knowledge conflict
large language models
retrieval-augmented generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

activation steering
contextual faithfulness
knowledge conflict
parameter-free intervention
retrieval-augmented generation
Nikhil Anand
Sr. Research Scientist, Kempner Institute @ Harvard
Machine Learning · Theoretical Physics
Shwetha Somasundaram
Adobe Research, India
Anirudh Phukan
Indian Institute of Science (IISc), Bengaluru
Apoorv Saxena
Inception Labs
Koyel Mukherjee
Adobe Research
Algorithms · Deep Learning · Optimization · Online Learning