time2time: Causal Intervention in Hidden States to Simulate Rare Events in Time Series Foundation Models

📅 2025-09-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether time-series Transformer foundation models internally represent semantic concepts—such as market regimes—and whether such representations enable causal manipulation of rare, high-risk events—e.g., market crashes. We propose *activation transplantation*, a novel intervention method that replaces statistical moments of hidden states during forward propagation to achieve targeted, semantics-aware perturbation of event representations. Experiments reveal a graded, interpretable representation of event severity in the model’s latent space, allowing forecasts to be steered controllably between market turbulence and stability. We validate both the existence and intervenability of this latent semantic space across the Toto and Chronos architectures. Our approach establishes a new, interpretable, causally grounded paradigm for stress testing and risk-scenario generation using foundation models—moving beyond black-box prediction toward semantically aware, counterfactual control.

📝 Abstract
While transformer-based foundation models excel at forecasting routine patterns, two questions remain: do they internalize semantic concepts such as market regimes, or merely fit curves? And can their internal representations be leveraged to simulate rare, high-stakes events such as market crashes? To investigate this, we introduce activation transplantation, a causal intervention that manipulates hidden states by imposing the statistical moments of one event (e.g., a historical crash) onto another (e.g., a calm period) during the forward pass. This procedure deterministically steers forecasts: injecting crash semantics induces downturn predictions, while injecting calm semantics suppresses crashes and restores stability. Beyond binary control, we find that models encode a graded notion of event severity, with the latent vector norm directly correlating with the magnitude of systemic shocks. Validated across two architecturally distinct TSFMs, Toto (decoder only) and Chronos (encoder-decoder), our results demonstrate that steerable, semantically grounded representations are a robust property of large time series transformers. Our findings provide evidence for a latent concept space that governs model predictions, shifting interpretability from post-hoc attribution to direct causal intervention, and enabling semantic "what-if" analysis for strategic stress-testing.
Problem

Research questions and friction points this paper is trying to address.

Simulate rare events in time series foundation models
Determine if models internalize semantic concepts like market regimes
Leverage internal representations to simulate rare, high-stakes scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causal intervention via activation transplantation
Manipulate hidden states using statistical moments
Steer forecasts deterministically with semantic injection
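The paper does not include code on this page, but the intervention it describes—imposing the statistical moments of one event's hidden states onto another during the forward pass—can be sketched. Assuming the moments in question are the per-feature mean and standard deviation taken over the time axis (an AdaIN-style transfer; the exact moments and layer choice are the authors', not shown here), a minimal version looks like:

```python
import numpy as np

def activation_transplant(h_target, h_source, eps=1e-6):
    """Impose the per-feature mean/std of a donor event's hidden states
    onto the current run's hidden states (sketch, not the paper's code).

    h_target: hidden states being steered, shape (time, d_model)
    h_source: cached hidden states from the donor event (e.g., a crash),
              shape (time, d_model)
    Moments are computed over the time axis.
    """
    mu_t, sd_t = h_target.mean(axis=0), h_target.std(axis=0)
    mu_s, sd_s = h_source.mean(axis=0), h_source.std(axis=0)
    # Whiten the target activations, then re-color them with the
    # donor event's statistics.
    return (h_target - mu_t) / (sd_t + eps) * sd_s + mu_s
```

In practice such a function would be applied inside a forward hook at a chosen Transformer layer (e.g., PyTorch's `register_forward_hook`), so that the transplanted activations propagate through the remaining layers and steer the forecast.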
Debdeep Sanyal
Undergraduate Student
Large Language Models, Reasoning, Unlearning, Planning, Reinforcement Learning
Aaryan Nagpal
Birla AI Labs
Dhruv Kumar
Birla AI Labs, BITS Pilani
Murari Mandal
Birla AI Labs, KIIT Bhubaneswar
Saurabh Deshpande
Birla AI Labs