Continual GUI Agents

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance degradation of GUI agents under dynamic changes—such as interface domain shifts or resolution variations—caused by data distribution drift. To tackle this challenge, the study introduces the first continual learning benchmark tailored for GUI agents and proposes GUI-Anchoring in Flux (GUI-AiF), a reinforcement-based fine-tuning framework. GUI-AiF incorporates two novel reward mechanisms, APR-iF and ARR-iF, which dynamically anchor interaction points and regions to reduce reliance on static spatial cues prevalent in existing approaches. Experimental results demonstrate that GUI-AiF significantly outperforms state-of-the-art baselines, confirming the effectiveness and robustness of reinforcement-guided fine-tuning in continual GUI interaction scenarios.

📝 Abstract
As digital environments (data distributions) are in flux, with new GUI data arriving over time and introducing new domains or resolutions, agents trained on static environments deteriorate in performance. In this work, we introduce Continual GUI Agents, a new task that requires GUI agents to perform continual learning under shifting domains and resolutions. We find that existing methods fail to maintain stable grounding as GUI distributions shift over time, owing to the diversity of UI interaction points and regions in these in-flux scenarios. To address this, we introduce GUI-Anchoring in Flux (GUI-AiF), a new reinforcement fine-tuning framework that stabilizes continual learning through two novel rewards: the Anchoring Point Reward in Flux (APR-iF) and the Anchoring Region Reward in Flux (ARR-iF). These rewards guide agents to align with shifting interaction points and regions, mitigating the tendency of existing reward strategies to over-adapt to static grounding cues (e.g., fixed coordinates or element scales). Extensive experiments show that GUI-AiF surpasses state-of-the-art baselines. Our work establishes the first continual learning framework for GUI agents, revealing the untapped potential of reinforcement fine-tuning for continual GUI agents.
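The abstract does not give the formulas for APR-iF and ARR-iF. As an illustrative sketch only (not the paper's actual rewards), an anchoring-style reward could score a predicted click against a target point with distances normalized by the target element's size, so the signal stays meaningful across resolution changes, plus a region reward for landing inside the element's bounding box. The function names and the exponential shaping below are assumptions for illustration:

```python
import math

def anchoring_point_reward(pred, target, region):
    """Reward in (0, 1] that peaks when the predicted click hits the target point.

    pred, target: (x, y) pixel coordinates.
    region: (x1, y1, x2, y2) bounding box of the target element.
    Distances are normalized by the element's width/height, so the reward is
    invariant to screen resolution (an assumption, not the paper's formula).
    """
    w = max(region[2] - region[0], 1)
    h = max(region[3] - region[1], 1)
    dx = (pred[0] - target[0]) / w
    dy = (pred[1] - target[1]) / h
    return math.exp(-math.hypot(dx, dy))

def anchoring_region_reward(pred, region):
    """Binary reward: 1.0 if the predicted click falls inside the element's box."""
    x1, y1, x2, y2 = region
    return 1.0 if (x1 <= pred[0] <= x2 and y1 <= pred[1] <= y2) else 0.0
```

Because distances are scaled by element size rather than fixed pixels, the same reward values arise whether the UI is rendered at 1x or 2x resolution, which is the kind of resolution-robust grounding signal the abstract describes.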
Problem

Research questions and friction points this paper is trying to address.

Continual Learning
GUI Agents
Domain Shift
Resolution Change
Stable Grounding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continual Learning
GUI Agents
Reinforcement Fine-tuning
Anchoring Reward
Domain Shift