🤖 AI Summary
This work addresses the high unpredictability of cognitive behaviors in large language models—shaped by prompts, layers, and context—which hinders effective diagnosis and control. To tackle this, the authors propose CBMAS, a novel framework that extends cognitive bias analysis into continuous intervention trajectories. By constructing steering vectors, performing dense α-scanning, generating logit lens bias curves, and conducting layer-wise sensitivity analyses, CBMAS reveals nonlinear relationships between intervention strength and model behavior. The approach identifies critical thresholds at which behavioral flips occur, thereby establishing an interpretable link between high-level cognitive phenomena and underlying representational dynamics. The study further contributes an open-source CLI tool and a diverse set of cognitive behavior datasets to support reproducibility and future research.
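The steering-vector construction the summary describes is a standard contrastive-activation technique; a minimal sketch under assumed shapes and names (illustrative only, not CBMAS's actual API) might look like:

```python
import numpy as np

# Hypothetical sketch: a steering vector is built as the difference of mean
# hidden activations between two contrastive prompt sets (e.g. biased vs.
# neutral completions) captured at one layer. Shapes and variable names here
# are assumptions for illustration.

rng = np.random.default_rng(42)
n_prompts, d_model = 24, 64

# Stand-ins for hidden states collected at a single layer.
biased_acts = rng.normal(loc=0.3, size=(n_prompts, d_model))
neutral_acts = rng.normal(loc=-0.3, size=(n_prompts, d_model))

steering_vector = biased_acts.mean(axis=0) - neutral_acts.mean(axis=0)
steering_vector /= np.linalg.norm(steering_vector)  # unit norm so alphas are comparable

def steer(hidden, alpha):
    """Add the steering vector at strength alpha to a hidden state."""
    return hidden + alpha * steering_vector

steered = steer(neutral_acts[0], alpha=2.0)
```

In a real run, the activations would come from hooked transformer layers rather than random draws, and the intervention would be applied during the forward pass.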
📝 Abstract
Large language models (LLMs) often encode cognitive behaviors unpredictably across prompts, layers, and contexts, making them difficult to diagnose and control. We present CBMAS, a diagnostic framework for continuous activation steering, which extends cognitive bias analysis from discrete before/after interventions to interpretable trajectories. By combining steering vector construction with dense α-sweeps, logit-lens-based bias curves, and layer-site sensitivity analysis, our approach can reveal tipping points where small intervention strengths flip model behavior and show how steering effects evolve across layer depth. We argue that these continuous diagnostics offer a bridge between high-level behavioral evaluation and low-level representational dynamics, contributing to the cognitive interpretability of LLMs. Lastly, we provide a CLI and datasets for various cognitive behaviors at the project repository, https://github.com/shimamooo/CBMAS.
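The dense α-sweep and tipping-point detection the abstract describes can be sketched as follows; this is a toy model under assumptions (a random linear readout standing in for the logit-lens projection, a single linear steering intervention), not the paper's implementation:

```python
import numpy as np

# Hypothetical sketch of a dense alpha-sweep: scan intervention strength alpha,
# read off a logit-lens-style bias score at each strength, and locate the
# threshold where the behavioral sign flips. W, bias_score, and the activation
# sets are illustrative stand-ins, not CBMAS internals.

rng = np.random.default_rng(0)
d = 16
W = rng.normal(size=(2, d))  # toy readout: logits for two candidate answers

pos_acts = rng.normal(loc=0.5, size=(32, d))
neg_acts = rng.normal(loc=-0.5, size=(32, d))
steer_vec = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
steer_vec /= np.linalg.norm(steer_vec)

def bias_score(h):
    """Logit-lens bias: difference between the two readout logits."""
    logits = W @ h
    return float(logits[0] - logits[1])

h0 = neg_acts.mean(axis=0)          # baseline hidden state to steer
alphas = np.linspace(-4.0, 4.0, 81) # dense alpha scan
curve = [bias_score(h0 + a * steer_vec) for a in alphas]

# Tipping points: alphas where the bias score changes sign.
signs = np.sign(curve)
flips = [alphas[i] for i in range(1, len(signs)) if signs[i] != signs[i - 1]]
print("bias curve length:", len(curve), "flip thresholds:", flips)
```

Because both the intervention and the toy readout are linear, this sketch yields at most one sign flip; the nonlinear, multi-threshold curves reported in the paper arise from steering real transformer layers, where the downstream computation is not linear in α.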