Steering Video Diffusion Transformers with Massive Activations

📅 2026-03-18
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work proposes Structured Activation Steering (STAS), a training-free, self-guided method that leverages the structured distribution of massive activations in video diffusion transformers. The paper uncovers, for the first time, the hierarchical spatiotemporal patterns of massive activations within temporally chunked latent spaces; STAS then selectively modulates activation values at critical token positions, enhancing both the visual quality and the temporal coherence of generated videos with minimal computational overhead. Extensive experiments demonstrate consistent improvements across diverse text-to-video diffusion models, highlighting STAS's efficiency, robustness, and broad applicability without requiring model retraining or architectural modifications.

πŸ“ Abstract
Despite rapid progress in video diffusion transformers, how their internal signals can be leveraged, with minimal overhead, to enhance video generation quality remains underexplored. In this work, we study the role of Massive Activations (MAs): rare, high-magnitude hidden-state spikes in video diffusion transformers. We observe that MAs emerge consistently across all visual tokens, with a clear magnitude hierarchy: first-frame tokens exhibit the largest MA magnitudes; latent-frame boundary tokens (the head and tail portions of each temporal chunk in the latent space) show elevated but slightly lower MA magnitudes than the first frame; and interior tokens within each latent frame, while still elevated, are comparatively moderate in magnitude. This structured pattern suggests that the model implicitly prioritizes token positions aligned with the temporal chunking of the latent space. Based on this observation, we propose Structured Activation Steering (STAS), a training-free, self-guidance-style method that steers MA values at first-frame and boundary tokens toward a scaled global-maximum reference magnitude. STAS achieves consistent improvements in video quality and temporal coherence across different text-to-video models while introducing negligible computational overhead.
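The steering rule described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function name, the flat (tokens, channels) layout, the `alpha` scale, and the "steer the single largest channel per token" rule are all assumptions introduced here for clarity.

```python
import numpy as np

def structured_activation_steering(hidden, frame_len, num_frames, alpha=0.9):
    """Sketch of STAS-style steering of massive activations (MAs).

    hidden:     (num_frames * frame_len, dim) visual tokens, frame-major order
    frame_len:  number of tokens per latent frame
    alpha:      hypothetical scale applied to the global-max reference magnitude
    """
    steered = hidden.copy()
    # Reference magnitude: a scaled global maximum over all activations.
    ref = alpha * np.abs(hidden).max()

    # Critical positions: every first-frame token, plus the head and tail
    # token of each latent frame (the latent-chunk boundaries).
    critical = set(range(frame_len))                    # first frame
    for f in range(num_frames):
        critical.add(f * frame_len)                     # head of frame f
        critical.add(f * frame_len + frame_len - 1)     # tail of frame f

    # At each critical token, move its largest-magnitude channel (the MA)
    # to the reference magnitude while preserving its sign.
    for i in critical:
        j = np.argmax(np.abs(steered[i]))
        steered[i, j] = np.sign(steered[i, j]) * ref
    return steered
```

In a real pipeline this logic would run inside a forward hook on the hidden states of selected transformer blocks at inference time; here a plain array stands in for those states, and all non-critical tokens pass through unchanged.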
Problem

Research questions and friction points this paper is trying to address.

Video Diffusion Transformers
Massive Activations
Video Generation Quality
Temporal Coherence
Model Internal Signals
Innovation

Methods, ideas, or system contributions that make the work stand out.

Massive Activations
Structured Activation Steering
Video Diffusion Transformers
Temporal Coherence
Training-free Guidance