Latent Concept Disentanglement in Transformer-based Language Models

📅 2025-06-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
It remains unclear whether large Transformer models genuinely learn latent conceptual structures during in-context learning (ICL), or instead rely on superficial heuristics—especially in multi-step reasoning tasks. Method: We propose a mechanistic interpretability framework integrating geometric analysis of representation spaces, controllable task construction to isolate implicit concepts, and layer-wise activation decomposition. Contribution/Results: We discover, for the first time, highly localized low-dimensional subspaces within the model that geometrically mirror the parameterization of continuous latent concepts. We empirically validate a stepwise concept composition mechanism: in discrete two-hop reasoning, we precisely identify and recompose latent concepts; in continuous parametric tasks, we localize structure-preserving, disentangled low-dimensional subspaces. These findings significantly enhance the interpretability and controllability of ICL, providing direct evidence that Transformers encode and manipulate abstract conceptual structures—not merely surface patterns.

📝 Abstract
When large language models (LLMs) use in-context learning (ICL) to solve a new task, they seem to grasp not only the goal of the task but also core, latent concepts in the demonstration examples. This raises the question of whether transformers represent latent structures as part of their computation or whether they take shortcuts to solve the problem. Prior mechanistic work on ICL does not address this question because it does not sufficiently examine the relationship between the learned representation and the latent concept, and the considered problem settings often involve only single-step reasoning. In this work, we examine how transformers disentangle and use latent concepts. We show that in 2-hop reasoning tasks with a latent, discrete concept, the model successfully identifies the latent concept and does step-by-step concept composition. In tasks parameterized by a continuous latent concept, we find low-dimensional subspaces in the representation space where the geometry mimics the underlying parameterization. Together, these results refine our understanding of ICL and the representation of transformers, and they provide evidence for highly localized structures in the model that disentangle latent concepts in ICL tasks.
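The 2-hop setting described in the abstract can be made concrete with a toy task generator. This is an illustrative sketch, not the paper's actual dataset: the mappings (cities, countries, currencies) and function names are invented here. The key property is that demonstrations only pair inputs with final outputs, so separating the two hops requires recovering the latent intermediate concept.

```python
import random

# Hypothetical 2-hop ICL task: each example composes two latent mappings,
# f (hop 1: city -> country) and g (hop 2: country -> currency). The model
# only ever sees "input -> final output" pairs, so disentangling f and g
# means representing the hidden intermediate step. All names are illustrative.

CITY_TO_COUNTRY = {"paris": "france", "tokyo": "japan", "cairo": "egypt"}
COUNTRY_TO_CURRENCY = {"france": "euro", "japan": "yen", "egypt": "pound"}

def two_hop(city):
    """Compose the two latent hops: city -> country -> currency."""
    return COUNTRY_TO_CURRENCY[CITY_TO_COUNTRY[city]]

def make_prompt(n_demos, query):
    """Build an ICL prompt of demonstrations followed by an open query."""
    cities = [c for c in CITY_TO_COUNTRY if c != query]
    demos = [f"{c} -> {two_hop(c)}" for c in random.sample(cities, n_demos)]
    return "\n".join(demos + [f"{query} ->"])

print(make_prompt(2, "tokyo"))
# The correct continuation is "yen". Whether intermediate layers represent
# the hidden hop ("japan") is the kind of question the paper probes.
```

Probing whether any layer's activations linearly encode the unseen intermediate value (here, the country) is one way to test for step-by-step composition rather than a memorized shortcut.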
Problem

Research questions and friction points this paper is trying to address.

Examine how transformers disentangle and use latent concepts during in-context learning
Investigate whether models perform step-by-step concept composition in 2-hop reasoning tasks with a discrete latent concept
Analyze whether low-dimensional representation subspaces mirror the geometry of continuous latent concepts
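The third question can be sketched numerically. This is a minimal sketch under assumed conditions, not the paper's code: suppose we collect a hidden-state vector for each value of a continuous latent parameter θ. If the model encodes θ in a low-dimensional subspace whose geometry mirrors the parameterization, PCA on the stacked activations should concentrate variance in a few components — here simulated with an angular concept embedded as a circle.

```python
import numpy as np

# Assumed setup: h(theta) is a d_model-dim hidden state collected for tasks
# parameterized by a continuous latent concept theta. We simulate activations
# that embed a 2-D circular signal in d_model dimensions plus noise, then
# check that PCA recovers the 2-D subspace matching the latent geometry.

rng = np.random.default_rng(0)
d_model = 64
thetas = np.linspace(0, 2 * np.pi, 200, endpoint=False)

# Two random unit directions span the hypothetical concept subspace.
basis = rng.standard_normal((2, d_model))
basis /= np.linalg.norm(basis, axis=1, keepdims=True)
signal = np.stack([np.cos(thetas), np.sin(thetas)], axis=1) @ basis
acts = signal + 0.05 * rng.standard_normal((len(thetas), d_model))

# PCA via SVD of the centered activation matrix.
centered = acts - acts.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
var_ratio = s**2 / np.sum(s**2)
print(f"variance captured by top-2 PCs: {var_ratio[:2].sum():.3f}")
# Nearly all variance falls in a 2-D subspace, mirroring the circular
# parameterization -- the signature the paper looks for in real models.
```

On real model activations the same recipe applies: stack hidden states across values of the latent parameter, run PCA, and compare the low-dimensional geometry to the known parameterization.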
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evidence that transformers disentangle latent concepts into highly localized structures during ICL
Low-dimensional representation subspaces whose geometry mirrors the underlying continuous parameterization
Step-by-step identification and composition of latent concepts in 2-hop reasoning tasks
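The "recompose latent concepts" claim suggests an intervention-style test, which can be sketched as follows. This is a hedged illustration with synthetic vectors, not activations extracted from a real transformer: if a concept lives in a known low-dimensional subspace, replacing an activation's component in that subspace with a donor activation's component should swap the concept while leaving everything else intact.

```python
import numpy as np

# Synthetic stand-in for subspace patching: u spans a hypothetical 1-D
# concept subspace; swapping components along u exchanges the concept
# carried by two activations without disturbing the orthogonal complement.

rng = np.random.default_rng(1)
d_model = 32

# Unit vector spanning the assumed concept direction.
u = rng.standard_normal(d_model)
u /= np.linalg.norm(u)

def swap_concept(h, h_donor, u):
    """Remove h's component along u and splice in the donor's component."""
    return h - (h @ u) * u + (h_donor @ u) * u

h_a = rng.standard_normal(d_model)   # activation carrying concept A
h_b = rng.standard_normal(d_model)   # activation carrying concept B
h_patched = swap_concept(h_a, h_b, u)

# The patched vector matches B along u and A everywhere else.
assert np.isclose(h_patched @ u, h_b @ u)
```

In a real interpretability pipeline the same operation would be applied to residual-stream activations mid-forward-pass; a behavior change consistent with the donor concept would support the claim that the subspace causally carries the concept.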