🤖 AI Summary
This work addresses the high memory overhead and latency of KV caching that hinder large vision-language models in long-horizon GUI interaction tasks. The authors propose ST-Lite, a training-free framework built on an observation revealed here for the first time: attention in GUI-related tasks is uniformly highly sparse across all layers. Leveraging this insight, they design a dual-branch scoring strategy that dynamically models spatio-trajectory dependencies. Specifically, Component-Centric Spatial Saliency (CSS) and Trajectory-Aware Semantic Gating (TSG) are introduced to preserve critical UI structures while pruning redundant historical tokens. With only 10–20% of the original cache budget, ST-Lite achieves a 2.45× decoding speedup while matching or even surpassing the performance of full-cache baselines.
📝 Abstract
Large Vision-Language Models (VLMs) have emerged as powerful engines for autonomous GUI agents, yet their deployment is severely constrained by the substantial memory footprint and latency of the Key-Value (KV) cache during long-horizon interactions. While existing cache compression methods have proven effective for LLMs, we empirically demonstrate that they suffer from suboptimal performance in GUI scenarios due to a fundamental misalignment: unlike general visual tasks, where attention sparsity varies across layers, GUI attention patterns exhibit uniformly high sparsity across all transformer layers. Motivated by this insight, we propose ST-Lite, a training-free KV cache compression framework tailored for efficient GUI agents that explicitly addresses the dynamic spatio-trajectory dependencies within GUI data streams. ST-Lite introduces a novel dual-branch scoring policy incorporating Component-centric Spatial Saliency (CSS) and Trajectory-aware Semantic Gating (TSG). Specifically, CSS preserves the structural integrity of interactive UI elements by evaluating local neighborhood saliency, while TSG mitigates historical redundancy by dynamically filtering visually repetitive KV pairs within the interaction trajectory. Extensive evaluations demonstrate that with only a 10–20% cache budget, ST-Lite achieves a 2.45× decoding acceleration while maintaining performance comparable or even superior to full-cache baselines, offering a scalable solution for resource-constrained GUI agents.
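To make the dual-branch idea concrete, here is a minimal sketch of a scoring-and-pruning step in the spirit the abstract describes: one branch smooths attention over a local token neighborhood (a CSS-like spatial-saliency signal), the other down-weights tokens whose keys nearly duplicate earlier ones (a TSG-like redundancy gate), and only a small budget of top-scoring KV pairs is retained. All function and parameter names, the smoothing kernel, the cosine-similarity gate, and every threshold here are illustrative assumptions, not the paper's actual CSS/TSG formulas.

```python
import numpy as np

def compress_kv_cache(keys, values, attn, budget_ratio=0.15,
                      neighborhood=3, sim_threshold=0.95, alpha=0.5):
    """Hedged sketch of a dual-branch KV-cache scoring policy.

    keys, values : (T, d) cached key/value vectors for T tokens.
    attn         : (T,) accumulated attention each token has received.
    Returns the kept keys, values, and their original indices.
    (Hypothetical interface; not the paper's implementation.)
    """
    T = len(attn)

    # Branch 1 (CSS-like): smooth attention over a local neighborhood so
    # salient UI regions tend to be kept as contiguous chunks rather than
    # isolated tokens.
    kernel = np.ones(neighborhood) / neighborhood
    spatial = np.convolve(attn, kernel, mode="same")

    # Branch 2 (TSG-like): gate down tokens whose key is nearly identical
    # to some earlier token's key, i.e. visually repetitive history.
    normed = keys / (np.linalg.norm(keys, axis=1, keepdims=True) + 1e-8)
    gate = np.ones(T)
    for t in range(1, T):
        if (normed[:t] @ normed[t]).max() > sim_threshold:
            gate[t] = 0.1  # redundant with history -> mostly prunable

    # Fuse the two branches and keep only the top-scoring budget.
    score = (alpha * spatial + (1 - alpha) * attn) * gate
    k = max(1, int(budget_ratio * T))
    keep = np.sort(np.argsort(score)[-k:])  # kept indices, in order
    return keys[keep], values[keep], keep
```

Being training-free, a policy of this shape slots into decoding without touching model weights: after each step, the cache is rescored and truncated to the budget, which is where the memory and latency savings come from.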