Priors in Time: Missing Inductive Biases for Language Model Interpretability

📅 2025-11-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing sparse autoencoders (SAEs) assume concept representations are temporally stationary, yet language model activations exhibit strong non-stationarity (e.g., growing conceptual dimensionality, context-dependent correlations, and abrupt shifts at event boundaries), which limits their usefulness for interpretability. To address this, the paper proposes Temporal Feature Analysis, a temporally aware decomposition framework that disentangles a representation into a "predictable component" (encoding slow-varying, abstract information inferable from context) and a "residual component" (capturing fast-varying, novel information), drawing on Bayesian modeling and temporal inductive biases inspired by computational neuroscience. The resulting Temporal Feature Analyzers correctly parse garden-path sentences and detect event boundaries where standard SAEs fail, systematically revealing the evolution of conceptual dimensionality and the emergence of dynamic representational structure, and arguing for interpretability research aligned with the intrinsically temporal nature of language.

📝 Abstract
Recovering meaningful concepts from language model activations is a central aim of interpretability. While existing feature extraction methods aim to identify concepts that are independent directions, it is unclear if this assumption can capture the rich temporal structure of language. Specifically, via a Bayesian lens, we demonstrate that Sparse Autoencoders (SAEs) impose priors that assume independence of concepts across time, implying stationarity. Meanwhile, language model representations exhibit rich temporal dynamics, including systematic growth in conceptual dimensionality, context-dependent correlations, and pronounced non-stationarity, in direct conflict with the priors of SAEs. Taking inspiration from computational neuroscience, we introduce a new interpretability objective -- Temporal Feature Analysis -- which possesses a temporal inductive bias to decompose representations at a given time into two parts: a predictable component, which can be inferred from the context, and a residual component, which captures novel information unexplained by the context. Temporal Feature Analyzers correctly parse garden path sentences, identify event boundaries, and more broadly delineate abstract, slow-moving information from novel, fast-moving information, while existing SAEs show significant pitfalls in all the above tasks. Overall, our results underscore the need for inductive biases that match the data in designing robust interpretability tools.
Problem

Research questions and friction points this paper is trying to address.

Analyzing temporal dynamics in language model representations
Addressing limitations of independence assumptions in feature extraction
Developing interpretability tools with temporal inductive biases
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Temporal Feature Analysis for interpretability
Decomposes representations into predictable and residual components
Uses temporal inductive bias to capture language dynamics
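
The predictable/residual split described above can be illustrated with a toy sketch. The paper's actual objective is not reproduced here; this assumes a simple least-squares predictor from the previous activation as a stand-in for "inferable from context," applied to synthetic activations with a slow drift plus fast per-step noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy activation trajectory: T time steps, d-dimensional "model activations".
# A slow component drifts smoothly (abstract context); a fast component is
# fresh noise at each step (novel information).
T, d = 200, 8
slow = np.cumsum(0.05 * rng.standard_normal((T, d)), axis=0)
fast = rng.standard_normal((T, d))
acts = slow + fast

# Hypothetical context predictor: a least-squares linear map from the
# previous activation to the current one. The real method uses a learned
# temporal objective; this linear map is only for illustration.
X, Y = acts[:-1], acts[1:]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

predictable = X @ W          # component explained by the context
residual = Y - predictable   # novel information unexplained by the context

# By construction the two components sum back to the observed activation,
# and the residual absorbs the fast-varying noise.
print(predictable.shape, residual.shape)
```

A standard SAE would decode each `acts[t]` independently, which is exactly the time-independence prior the paper argues against; the decomposition above keeps the context in the loop by predicting step `t` from step `t-1`.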