XStreamVGGT: Extremely Memory-Efficient Streaming Vision Geometry Grounded Transformer with KV Cache Compression

📅 2026-01-03
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the unbounded growth of the key-value (KV) cache in StreamVGGT for streaming 3D reconstruction, which leads to excessive memory consumption and inference latency. To mitigate this, the authors propose a fine-tuning-free compression framework that jointly prunes and quantizes the cache. By analyzing multi-view redundancy, the method efficiently estimates token importance and applies a quantization strategy tailored to the distribution characteristics of KV tensors, enabling precise pruning and compression under the causal attention mechanism. Experiments show that the approach reduces memory usage by 4.42× and accelerates inference by 5.48× with negligible degradation in reconstruction quality, significantly improving the scalability and practicality of streaming 3D reconstruction systems.

📝 Abstract
Learning-based 3D visual geometry models have benefited substantially from large-scale transformers. Among these, StreamVGGT leverages frame-wise causal attention for strong streaming reconstruction, but suffers from unbounded KV cache growth, leading to escalating memory consumption and inference latency as input frames accumulate. We propose XStreamVGGT, a tuning-free approach that systematically compresses the KV cache through joint pruning and quantization, enabling extremely memory-efficient streaming inference. Specifically, redundant KVs originating from multi-view inputs are pruned through efficient token importance identification, enabling a fixed memory budget. Leveraging the unique distribution of KV tensors, we incorporate KV quantization to further reduce memory consumption. Extensive evaluations show that XStreamVGGT achieves mostly negligible performance degradation while substantially reducing memory usage by 4.42$\times$ and accelerating inference by 5.48$\times$, enabling scalable and practical streaming 3D applications. The code is available at https://github.com/ywh187/XStreamVGGT/.
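The joint pruning-and-quantization idea described in the abstract can be sketched roughly as follows. This is an illustrative stand-in, not the authors' implementation: token importance is approximated here by accumulated attention mass (the paper instead derives importance from multi-view redundancy), quantization is plain per-token uniform 4-bit, and the function and parameter names are hypothetical.

```python
import numpy as np

def prune_and_quantize_kv(keys, values, attn_scores, budget, n_bits=4):
    """Keep the `budget` most important cached tokens, then quantize the
    surviving K/V tensors to `n_bits`-integer codes (per-token asymmetric).

    keys, values : (n_tokens, head_dim) float arrays
    attn_scores  : (n_queries, n_tokens) attention weights over the cache
    """
    # Stand-in importance heuristic: total attention each cached token
    # receives from recent queries.
    importance = attn_scores.sum(axis=0)                # (n_tokens,)
    keep = np.sort(np.argsort(importance)[-budget:])    # indices to retain

    def quantize(x):
        # Per-token asymmetric uniform quantization to n_bits levels.
        lo = x.min(axis=-1, keepdims=True)
        hi = x.max(axis=-1, keepdims=True)
        scale = (hi - lo) / (2**n_bits - 1) + 1e-8
        q = np.round((x - lo) / scale).astype(np.uint8)
        return q, scale, lo                              # dequant: q*scale+lo

    return quantize(keys[keep]), quantize(values[keep]), keep
```

Pruning caps the cache at a fixed `budget` of tokens (fixed memory regardless of frame count), and quantization shrinks each surviving entry from float32 to a few bits plus per-token scale/offset; the two compose multiplicatively, which is how compression ratios like 4.42× become attainable.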

Problem

Research questions and friction points this paper is trying to address.

KV cache growth
memory efficiency
streaming 3D reconstruction
inference latency
vision geometry

Innovation

Methods, ideas, or system contributions that make the work stand out.

KV cache compression
streaming 3D reconstruction
memory-efficient transformer
token pruning
KV quantization
Zunhai Su
Shenzhen International Graduate School, Tsinghua University
Weihao Ye
Institute of Artificial Intelligence, Xiamen University
Hansen Feng
Beijing Institute of Technology
denoising, super-resolution, image restoration, Image and video processing, Computational
Keyu Fan
Shenzhen International Graduate School, Tsinghua University
Jing Zhang
East China University of Science and Technology
computer vision, image understanding
Dahai Yu
Florida State University
Uncertainty Quantification
Zhengwu Liu
The University of Hong Kong (HKU) / Tsinghua University (THU)
brain machine interfaces, computing in memory, memristor
Ngai Wong
Department of Electrical and Electronic Engineering, The University of Hong Kong