Understanding In-Context Learning Beyond Transformers: An Investigation of State Space and Hybrid Architectures

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Despite comparable performance on knowledge-intensive in-context learning (ICL) tasks, the underlying mechanisms of Transformer, state-space (Mamba/Mamba2), and hybrid large language models remain poorly understood. Method: We propose a dual-path analytical framework integrating function vector probing with targeted intervention to disentangle parametric knowledge retrieval from contextual knowledge understanding. Contribution/Results: We find that function vectors (FVs), which encode task-relevant knowledge, are predominantly localized in self-attention and Mamba layers, whereas Mamba2 exhibits knowledge-type-dependent non-FV mechanisms, indicating that its ICL capability does not rely on conventional attention-based representations. This work provides the first systematic evidence of fundamental mechanistic divergence across architectures in ICL. Our findings establish theoretical foundations and methodological tools for cross-architectural interpretability and architecture-aware task adaptation.

📝 Abstract
We perform in-depth evaluations of in-context learning (ICL) on state-of-the-art transformer, state-space, and hybrid large language models over two categories of knowledge-based ICL tasks. Using a combination of behavioral probing and intervention-based methods, we find that, while LLMs of different architectures can perform similarly on a task, their internal mechanisms can differ substantially. We discover that function vectors (FVs) responsible for ICL are primarily located in the self-attention and Mamba layers, and speculate that Mamba2 uses a mechanism other than FVs to perform ICL. FVs matter more for ICL involving parametric knowledge retrieval than for contextual knowledge understanding. Our work contributes to a more nuanced understanding of ICL across architectures and task types. Methodologically, our approach highlights the importance of combining behavioral and mechanistic analyses when investigating LLM capabilities.
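The FV probing procedure described in the abstract lends itself to a compact illustration: average last-token activations over few-shot prompts that demonstrate a task, then inject that vector into a zero-shot forward pass and see whether the task behavior transfers. Below is a minimal sketch, assuming a HuggingFace-style causal transformer; the model name, layer index, toy antonym task, and module path (`model.transformer.h`) are illustrative assumptions for a GPT-Neo-like model, not the paper's actual setup, and Mamba/Mamba2 backbones expose different module paths.

```python
# Minimal sketch of function-vector (FV) extraction and injection.
# All names below (model, layer index, prompts) are illustrative
# assumptions, not the authors' experimental configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-125m"  # stand-in for the models studied
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

LAYER = 6  # hypothetical layer at which the FV is extracted and injected

def hidden_at_last_token(prompt: str) -> torch.Tensor:
    """Hidden state of the final prompt token after block LAYER."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)
    # hidden_states[0] is the embedding output, so block LAYER's output
    # sits at index LAYER + 1.
    return out.hidden_states[LAYER + 1][0, -1]

# 1) Extract the FV: average last-token activations over few-shot ICL
#    prompts that all demonstrate the same task (a toy antonym task here).
icl_prompts = [
    "hot -> cold\nbig -> small\nfast ->",
    "up -> down\nwet -> dry\nlight ->",
]
fv = torch.stack([hidden_at_last_token(p) for p in icl_prompts]).mean(0)

# 2) Inject the FV into a zero-shot forward pass via a hook and check
#    whether it steers the model toward the demonstrated task.
def add_fv(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    hidden[:, -1] += fv  # add the FV at the final token position
    return output

handle = model.transformer.h[LAYER].register_forward_hook(add_fv)
ids = tok("slow ->", return_tensors="pt")
with torch.no_grad():
    logits = model(**ids).logits[0, -1]
handle.remove()
print(tok.decode([logits.argmax().item()]))  # ideally something like " fast"
```

Sweeping `LAYER` across blocks and measuring how well each injected FV recovers few-shot behavior from a zero-shot prompt is what yields a localization profile of the kind the paper reports for self-attention versus Mamba layers.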
Problem

Research questions and friction points this paper is trying to address.

Investigating in-context learning mechanisms across transformer and state-space architectures
Identifying function vectors in self-attention and Mamba layers for knowledge retrieval
Comparing behavioral and mechanistic differences in language model architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluated state-space and hybrid architectures for in-context learning
Identified function vectors in self-attention and Mamba layers
Combined behavioral probing with mechanistic intervention methods (see the intervention sketch after this list)
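To complement the probing side, here is a hedged sketch of the targeted-intervention side: knock out one block at a time during an ICL prompt and record how much the correct answer's logit drops. Comparing such causal-importance profiles across self-attention, Mamba, and Mamba2 blocks is the kind of analysis the last bullet refers to. The sketch reuses `tok` and `model` from the previous example, and the ablation scheme (identity pass-through) is an assumption, not necessarily the paper's exact procedure.

```python
# Hedged sketch of a layer-wise intervention: make one block an identity
# map (undo its update to the residual stream) and measure the drop in
# the correct answer's logit. Reuses `tok` and `model` from the sketch
# above; module paths differ for Mamba/Mamba2 backbones.
import torch

def answer_logit(prompt: str, answer: str) -> float:
    """Logit assigned to the first token of `answer` after `prompt`."""
    ids = tok(prompt, return_tensors="pt")
    ans_id = tok(answer, add_special_tokens=False).input_ids[0]
    with torch.no_grad():
        return model(**ids).logits[0, -1, ans_id].item()

def ablate_block(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    hidden.copy_(inputs[0])  # pass the block's input straight through
    return output

prompt, answer = "hot -> cold\nbig -> small\nfast ->", " slow"
baseline = answer_logit(prompt, answer)
for layer_idx, block in enumerate(model.transformer.h):
    handle = block.register_forward_hook(ablate_block)
    drop = baseline - answer_logit(prompt, answer)
    handle.remove()
    print(f"layer {layer_idx:2d}: logit drop {drop:+.3f}")
```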
Shenran Wang
Master of Science, UBC
Machine Learning, NLP

Timothy Tin-Long Tse
The University of British Columbia

Jian Zhu
The University of British Columbia