CogSteer: Cognition-Inspired Selective Layer Intervention for Efficiently Steering Large Language Models

📅 2024-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from poor interpretability and limited controllability, hindering precise behavioral regulation. Method: drawing on cognitive science, this work introduces human eye-tracking data (previously unexplored in LLM analysis) to uncover hierarchical correlations between hidden-layer representations and cognitive metrics such as fixation duration and regression count. Based on these findings, the authors propose a cognition-driven paradigm for automatically selecting optimal intervention layers and design an implicit layer-wise contrastive intervention mechanism that suppresses toxic outputs during inference. The approach combines eye-movement modeling, inter-layer representation correlation analysis, and parameter-efficient fine-tuning (LoRA/Adapter), requiring intervention in only 1-3 layers. Results: evaluated on GPT-2, LLaMA2-7B, and Mixtral-7B, the method significantly improves natural language understanding, reasoning, and generation performance; reduces GPU memory overhead by more than 60%; cuts toxicity by more than 45%; and is model-agnostic.
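As a concrete illustration of the layer-wise correlation analysis, here is a minimal sketch (not the paper's released code): each hidden layer is probed with a linear regressor for a per-token eye-tracking measure, and layers are ranked by the Spearman correlation between probe predictions and the measure. The Ridge probe, the in-sample evaluation, and the top-k selection are illustrative assumptions.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge

def layer_correlations(hidden_states, gaze_measure):
    """hidden_states: list of (n_tokens, dim) arrays, one per layer.
    gaze_measure: (n_tokens,) array, e.g., first-pass fixation duration."""
    scores = []
    for layer_repr in hidden_states:
        # Probe the layer: can a linear map recover the cognitive signal?
        probe = Ridge(alpha=1.0).fit(layer_repr, gaze_measure)
        rho, _ = spearmanr(probe.predict(layer_repr), gaze_measure)
        scores.append(rho)
    return np.array(scores)

# Heuristic layer choice: intervene in the 1-3 most cognition-aligned layers.
# best_layers = np.argsort(-layer_correlations(hidden_states, gaze))[:3]
```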

📝 Abstract
Large Language Models (LLMs) achieve remarkable performance through pretraining on extensive data. This enables efficient adaptation to diverse downstream tasks. However, the lack of interpretability in their underlying mechanisms limits the ability to effectively steer LLMs for specific applications. In this work, we investigate the intrinsic mechanisms of LLMs from a cognitive perspective using eye movement measures. Specifically, we analyze the layer-wise correlation between human cognitive indicators and LLM representations. Building on these insights, we propose a heuristic approach for selecting the optimal steering layer to modulate LLM semantics. To this end, we introduce an efficient selective layer intervention based on prominent parameter-efficient fine-tuning methods, which conventionally adjust either all layers or only the final layer. Additionally, we present an implicit layer contrastive intervention during inference to steer LLMs away from toxic outputs. Extensive experiments on natural language understanding, reasoning, and generation tasks, conducted on GPT-2, LLaMa2-7B, and Mixtral-7B, demonstrate the effectiveness and efficiency of our approach. As a model-agnostic framework, it enhances the interpretability of LLMs while improving efficiency for safe deployment.
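The selective intervention can be approximated with off-the-shelf PEFT tooling. The sketch below uses Hugging Face PEFT's `layers_to_transform` option to attach LoRA adapters to only a few chosen GPT-2 layers rather than all of them; the layer indices and hyperparameters are placeholders, not the paper's reported settings.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")

config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],       # GPT-2's fused attention projection
    layers_to_transform=[8, 9, 10],  # placeholder: the selected 1-3 layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()   # far fewer trainable params than full-model LoRA
```

Restricting adapters to the selected layers is also what yields the memory savings reported above, since optimizer state is kept only for those layers' LoRA weights.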
Problem

Research questions and friction points this paper is trying to address.

Interpretability of LLMs
Selective layer intervention
Steering LLMs safely
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cognitive-inspired layer intervention
Selective parameter-efficient fine-tuning
Implicit layer contrastive steering (see the sketch after this list)
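A speculative sketch of the inference-time contrastive step, in the spirit of contrastive decoding: next-token logits from the model with the selected-layer intervention are contrasted against logits from the unsteered base model, amplifying the shift the intervention induces. The two-forward-pass design, the `alpha` weight, and the function name are assumptions, not the paper's exact formulation.

```python
import torch

@torch.no_grad()
def contrastive_next_token_logits(base_model, steered_model, input_ids, alpha=0.5):
    base = base_model(input_ids).logits[:, -1, :]        # unsteered distribution
    steered = steered_model(input_ids).logits[:, -1, :]  # with layer intervention
    # Push generation further along the direction the intervention induces,
    # i.e., away from the behavior (e.g., toxicity) the contrast isolates.
    return steered + alpha * (steered - base)
```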
Xintong Wang
Department of Informatics, Universität Hamburg
Jingheng Pan
University of Hamburg
Liang Ding
The University of Sydney
Longqin Jiang
Department of Informatics, Universität Hamburg
Xingshan Li
Institute of Psychology, Chinese Academy of Sciences
Christian Biemann
Department of Informatics, Universität Hamburg