🤖 AI Summary
GUI agents that compress past trajectories into text suffer from context inflation, loss of visual fidelity, and degraded generalization on unfamiliar interfaces and long-horizon tasks. This paper proposes a scalable continuous memory mechanism to address these problems. A vision-language model (VLM) serves as the memory encoder, compressing each interaction trajectory into a fixed-length sequence of continuous vectors that are integrated end-to-end as learnable embeddings at the backbone's input layer; only the encoder's Q-Former is fine-tuned, via LoRA (1.2% of parameters). An automated data flywheel jointly scales memory capacity and retrieval depth, collecting over 100K trajectories for roughly $4,000. Augmented with this continuous memory, Qwen-2.5-VL-7B achieves significantly higher task success rates on real-world GUI benchmarks, with long-horizon reasoning and out-of-distribution performance rivaling state-of-the-art closed-source models such as GPT-4o and Claude-4.
📝 Abstract
We study how to endow GUI agents with scalable memory that helps them generalize across unfamiliar interfaces and long-horizon tasks. Prior GUI agents compress past trajectories into text tokens, which balloons context length and misses decisive visual cues (e.g., exact widget size and position). We propose a continuous memory that encodes each GUI trajectory into a fixed-length sequence of continuous embeddings using the VLM itself as an encoder; these embeddings are plugged directly into the backbone's input layer, sharply reducing context cost while preserving fine-grained visual information. As memory size and retrieval depth increase, performance improves monotonically, unlike text memories, which degrade with long prompts. To grow memory at low cost, we introduce an auto-scaling data flywheel that (i) discovers new environments via search, (ii) synthesizes tasks with an open-source VLM, (iii) rolls out trajectories with the agent, and (iv) verifies success with the same VLM. Using this pipeline, we collect 100K+ trajectories for about $4,000 and fine-tune only the memory encoder (LoRA on a Q-Former, 1.2% of parameters) with 1,500 samples. On real-world GUI benchmarks, our memory-augmented agent consistently improves success rates under long horizons and distribution shifts. Notably, Qwen-2.5-VL-7B + continuous memory achieves performance comparable to state-of-the-art closed-source models (e.g., GPT-4o, Claude-4).
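The core idea above can be illustrated with a toy sketch: a small set of learned queries cross-attends over a variable-length trajectory of visual features (Q-Former style) to produce a fixed number of continuous memory embeddings, which are then prepended to the current task's input embeddings. This is not the paper's implementation; all shapes, names, and the single-head attention are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64  # embedding dimension (assumed)
K = 8   # fixed memory length per trajectory (assumed)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def compress_trajectory(traj_feats, queries, Wk, Wv):
    """Cross-attend K learned queries over T trajectory features -> (K, D) memory."""
    keys = traj_feats @ Wk                          # (T, D)
    vals = traj_feats @ Wv                          # (T, D)
    attn = softmax(queries @ keys.T / np.sqrt(D))   # (K, T) attention weights
    return attn @ vals                              # (K, D): fixed-length memory

# A variable-length trajectory, e.g., 300 screenshot-patch features
traj = rng.normal(size=(300, D))
queries = rng.normal(size=(K, D))                   # learned query embeddings
Wk, Wv = rng.normal(size=(D, D)), rng.normal(size=(D, D))

memory = compress_trajectory(traj, queries, Wk, Wv)
prompt_embeds = rng.normal(size=(50, D))            # current task's input embeddings
inputs = np.concatenate([memory, prompt_embeds])    # memory costs K tokens, not 300

print(memory.shape, inputs.shape)  # (8, 64) (58, 64)
```

Note the key property: however long the trajectory grows, the memory occupies only K positions of the backbone's context, which is why more (or deeper) retrieved memories need not blow up prompt length the way text summaries do.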