XStreamVGGT: Extremely Memory-Efficient Streaming Vision Geometry Grounded Transformer with KV Cache Compression

📅 2026-02-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the memory and latency bottlenecks in StreamVGGT for long-horizon streaming 3D reconstruction, caused by the unbounded growth of the key-value (KV) cache. We propose a tuning-free KV cache compression method that, for the first time, seamlessly integrates visual-token-importance-based pruning with dimension-adaptive quantization within the KV cache management of streaming Vision Transformers. Operating under a fixed memory budget, our approach enables ultra-low-overhead streaming inference while remaining compatible with causal attention and FlashAttention. It achieves a 4.42× reduction in memory consumption and a 5.48× speedup in inference latency, with negligible performance degradation, thereby significantly enhancing the practicality and scalability of long-horizon 3D reconstruction.

📝 Abstract
Learning-based 3D visual geometry models have significantly advanced with the advent of large-scale transformers. Among these, StreamVGGT leverages frame-wise causal attention to deliver robust and efficient streaming 3D reconstruction. However, it suffers from unbounded growth in the Key-Value (KV) cache due to the massive influx of vision tokens from multi-image and long-video inputs, leading to increased memory consumption and inference latency as input frames accumulate. This ultimately limits its scalability for long-horizon applications. To address this gap, we propose XStreamVGGT, a tuning-free approach that seamlessly integrates pruning and quantization to systematically compress the KV cache, enabling extremely memory-efficient streaming inference. Specifically, redundant KVs generated from multi-frame inputs are initially pruned to conform to a fixed KV memory budget using an efficient token-importance identification mechanism that maintains full compatibility with high-performance attention kernels (e.g., FlashAttention). Additionally, leveraging the inherent distribution patterns of KV tensors, we apply dimension-adaptive KV quantization within the pruning pipeline to further minimize memory overhead while preserving numerical accuracy. Extensive evaluations show that XStreamVGGT achieves mostly negligible performance degradation while substantially reducing memory usage by 4.42× and accelerating inference by 5.48×, enabling practical and scalable streaming 3D applications. The code is available at https://github.com/ywh187/XStreamVGGT/.
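The two-stage pipeline the abstract describes (budget-constrained token pruning followed by dimension-adaptive quantization of the surviving KV entries) can be sketched as below. This is an illustrative reconstruction, not the paper's implementation: the importance score, the per-channel min-max quantizer, and all function names here are assumptions made for the sketch.

```python
import numpy as np

def compress_kv_cache(K, V, scores, budget, n_bits=4):
    """Illustrative KV-cache compression: importance-based token pruning
    followed by per-dimension (per-channel) quantization.

    K, V   : (num_tokens, head_dim) cached key/value tensors
    scores : (num_tokens,) per-token importance, e.g. accumulated attention
             mass (how XStreamVGGT actually scores tokens is an assumption)
    budget : fixed number of tokens to keep
    """
    # 1) Prune: keep the `budget` most important tokens, preserving their
    #    original causal order so standard attention kernels still apply.
    keep = np.sort(np.argsort(scores)[-budget:])
    K_kept, V_kept = K[keep], V[keep]

    # 2) Quantize with one scale/zero-point per head dimension, exploiting
    #    the fact that KV magnitudes tend to cluster by channel.
    def quantize_per_channel(X, bits):
        lo = X.min(axis=0, keepdims=True)
        hi = X.max(axis=0, keepdims=True)
        scale = (hi - lo) / (2**bits - 1) + 1e-8
        q = np.round((X - lo) / scale).astype(np.uint8)
        return q, scale, lo

    return keep, quantize_per_channel(K_kept, n_bits), quantize_per_channel(V_kept, n_bits)

def dequantize(q, scale, lo):
    """Recover approximate float KVs for use inside attention."""
    return q.astype(np.float32) * scale + lo
```

Because pruning happens before quantization, the cache size is bounded by `budget * head_dim * n_bits` bits per tensor regardless of how many frames have streamed in, which is the fixed-memory property the paper targets.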
Problem

Research questions and friction points this paper is trying to address.

KV cache
memory efficiency
streaming 3D reconstruction
transformer
inference latency
Innovation

Methods, ideas, or system contributions that make the work stand out.

KV cache compression
streaming 3D reconstruction
memory-efficient transformer
pruning and quantization
causal attention
Zunhai Su
Shenzhen International Graduate School, Tsinghua University
Weihao Ye
Institute of Artificial Intelligence, Xiamen University
Hansen Feng
Beijing Institute of Technology
denoising, super-resolution, image restoration, image and video processing, computational
Keyu Fan
Shenzhen International Graduate School, Tsinghua University
Jing Zhang
East China University of Science and Technology
computer vision, image understanding
Dahai Yu
Florida State University
uncertainty quantification
Zhengwu Liu
The University of Hong Kong (HKU) / Tsinghua University (THU)
brain machine interfaces, computing in memory, memristor
Ngai Wong
Department of Electrical and Electronic Engineering, The University of Hong Kong