🤖 AI Summary
Large language models (LLMs) suffer from poor interpretability and limited controllability, hindering precise behavioral regulation. Method: Drawing on cognitive science, this work introduces human eye-tracking data—previously unexplored in LLM analysis—to uncover hierarchical correlations between hidden-layer representations and cognitive metrics (e.g., fixation duration, regression count). Based on these findings, we propose a cognition-driven paradigm for automatic selection of optimal intervention layers and design an implicit layer-wise contrastive intervention mechanism to suppress toxic outputs during inference. Our approach integrates eye-movement modeling, inter-layer representation correlation analysis, and parameter-efficient fine-tuning (LoRA/Adapter), requiring intervention in only 1–3 layers. Results: Evaluated on GPT-2, LLaMA2-7B, and Mixtral-7B, the method significantly improves NLU, reasoning, and generation performance; reduces GPU memory overhead by >60%; decreases toxicity by >45%; and demonstrates model-agnostic applicability.
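The summary describes correlating layer-wise hidden representations with eye-tracking metrics (e.g., fixation duration) and picking the most-correlated layer as the intervention target. A minimal sketch of that selection step is below; the data shapes, the L2-norm saliency proxy, and the synthetic values are all illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical inputs: per-token fixation durations (ms) for 50 tokens,
# and hidden states from a 12-layer model (layers x tokens x hidden_dim).
fixation_ms = rng.gamma(shape=2.0, scale=100.0, size=50)
hidden_states = rng.normal(size=(12, 50, 64))

# Per-layer token saliency proxy (an assumption): L2 norm of each
# token's hidden state at that layer.
saliency = np.linalg.norm(hidden_states, axis=-1)  # shape (12, 50)

# Layer-wise Spearman correlation with the cognitive metric; the layer
# with the strongest |rho| is the candidate steering layer.
corrs = [spearmanr(saliency[layer], fixation_ms).correlation
         for layer in range(12)]
best_layer = int(np.argmax(np.abs(corrs)))
print(f"candidate steering layer (by |rho|): {best_layer}")
```

With real data, `hidden_states` would come from a forward pass over the same sentences the eye-tracking corpus was recorded on, aligned token-by-token.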
📝 Abstract
Large Language Models (LLMs) achieve remarkable performance through pretraining on extensive data, enabling efficient adaptation to diverse downstream tasks. However, the opacity of their underlying mechanisms limits our ability to steer LLMs effectively for specific applications. In this work, we investigate the intrinsic mechanisms of LLMs from a cognitive perspective using eye-movement measures. Specifically, we analyze the layer-wise correlation between human cognitive indicators and LLM representations. Building on these insights, we propose a heuristic approach for selecting the optimal steering layer to modulate LLM semantics. We then introduce an efficient selective-layer intervention built on prominent parameter-efficient fine-tuning methods, which conventionally adjust either all layers or only the final layer. Additionally, we present an implicit layer-contrastive intervention at inference time that steers LLMs away from toxic outputs. Extensive experiments on natural language understanding, reasoning, and generation tasks with GPT-2, LLaMA2-7B, and Mixtral-7B demonstrate the effectiveness and efficiency of our approach. As a model-agnostic framework, it enhances the interpretability of LLMs while improving efficiency for safe deployment.
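The abstract's "implicit layer contrastive intervention" can be pictured as contrasting two forward passes at decoding time: one through the original model and one through the intervened layer, then amplifying the shift the intervention introduces. The sketch below is a generic contrastive-decoding-style illustration under that assumption; the toy logits, the `alpha` scaling, and the linear combination are hypothetical, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a logit vector.
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def contrastive_logits(steered, base, alpha=1.0):
    # Amplify the shift introduced by the intervened layer and
    # suppress directions shared with the unsteered pass.
    return steered + alpha * (steered - base)

# Toy vocabulary of 5 tokens; index 3 stands in for a "toxic" token
# that the intervened pass already down-weights.
base = np.array([1.0, 0.5, 0.2, 2.0, 0.1])
steered = np.array([1.2, 0.6, 0.3, 0.5, 0.2])

probs = softmax(contrastive_logits(steered, base))
print(probs.argmax())  # the toxic token is pushed even further down
```

In practice both passes would share one backbone, so the extra cost is roughly one additional partial forward pass rather than a second model.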