Iris: Breaking GUI Complexity with Adaptive Focus and Self-Refining

📅 2024-12-13
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address low perceptual efficiency and weak coordination between object localization and description in complex GUI environments, this paper proposes Iris, a foundational visual agent framework for GUI-based human-computer interaction. Methodologically: (1) it introduces Information-Sensitive Cropping (ISC), a dynamic cropping strategy that leverages edge detection to focus on high-information-density regions; (2) it designs a Self-Refining Dual Learning (SRDL) mechanism that jointly trains referring (describing UI elements) and grounding (locating elements) via bidirectional feedback, eliminating the need for additional annotations. Built on multimodal large language models, the framework combines adaptive cropping with dual-path vision-language co-optimization. Trained on only 850K GUI annotations, it achieves state-of-the-art performance across multiple benchmarks, outperforming baselines trained on ten times more data, and significantly improves success rates on downstream web and OS automation tasks.
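The cropping idea can be sketched in a few lines. The following is my own minimal illustration, not the paper's implementation: it scores grid cells of a grayscale screenshot by edge density (mean gradient magnitude, a stand-in for the paper's edge-detection measure) and keeps the densest cells as crop candidates. The grid size and top-k cutoff are assumed parameters.

```python
import numpy as np

def information_density(patch: np.ndarray) -> float:
    """Score a grayscale patch by edge density (mean gradient magnitude)."""
    gy, gx = np.gradient(patch.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def isc_crop(screen: np.ndarray, grid: int = 4, top_k: int = 3):
    """Split a screenshot into a grid and rank cells by information density.

    Returns (row, col, score) tuples for the top_k densest cells,
    densest first. A real agent would then process these regions at
    higher resolution than the rest of the screen.
    """
    h, w = screen.shape
    ch, cw = h // grid, w // grid
    scores = []
    for r in range(grid):
        for c in range(grid):
            patch = screen[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            scores.append((r, c, information_density(patch)))
    scores.sort(key=lambda t: -t[2])
    return scores[:top_k]
```

On a synthetic screenshot where only one cell contains fine detail (e.g. a checkerboard pattern), that cell is ranked first, so the agent's computational budget would concentrate there.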

📝 Abstract
Digital agents are increasingly employed to automate tasks in interactive digital environments such as web pages, software applications, and operating systems. While text-based agents built on Large Language Models (LLMs) often require frequent updates due to platform-specific APIs, visual agents leveraging Multimodal Large Language Models (MLLMs) offer enhanced adaptability by interacting directly with Graphical User Interfaces (GUIs). However, these agents face significant challenges in visual perception, particularly when handling high-resolution, visually complex digital environments. This paper introduces Iris, a foundational visual agent that addresses these challenges through two key innovations: Information-Sensitive Cropping (ISC) and Self-Refining Dual Learning (SRDL). ISC dynamically identifies and prioritizes visually dense regions using an edge detection algorithm, enabling efficient processing by allocating more computational resources to areas with higher information density. SRDL enhances the agent's ability to handle complex tasks by leveraging a dual-learning loop, where improvements in referring (describing UI elements) reinforce grounding (locating elements) and vice versa, all without requiring additional annotated data. Empirical evaluations demonstrate that Iris achieves state-of-the-art performance across multiple benchmarks with only 850K GUI annotations, outperforming methods using 10x more training data. These improvements further translate to significant gains in both web and OS agent downstream tasks.
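The dual-learning loop described above can be sketched as cycle-consistency filtering: a referring pass describes an element, a grounding pass locates it from that description, and the pair is kept as annotation-free training data only if the round trip lands back near the original box. This is a hypothetical sketch; `refer` and `ground` stand in for the two model directions, and the tolerance check is my own simplification of the paper's feedback mechanism.

```python
def srdl_round(refer, ground, screenshot, element_box, tol=10):
    """One self-refining dual-learning round (illustrative sketch).

    refer(screenshot, box) -> text description of the element at `box`
    ground(screenshot, text) -> predicted box for `text`

    Returns a (description, box) pseudo-label pair when the round trip
    is cycle-consistent (each predicted coordinate within `tol` pixels),
    otherwise None. Consistent pairs can train both directions without
    any human annotation.
    """
    description = refer(screenshot, element_box)   # referring pass
    predicted = ground(screenshot, description)    # grounding pass
    consistent = all(abs(a - b) <= tol
                     for a, b in zip(predicted, element_box))
    return (description, element_box) if consistent else None
```

In this framing, better referring produces descriptions that are easier to ground, and better grounding validates more referring outputs, which is the bidirectional reinforcement the abstract describes.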
Problem

Research questions and friction points this paper is trying to address.

Simplified User Interface
Visual Assistant Optimization
Efficiency Enhancement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual Digital Assistant
Focus and Learning Capability
Efficient Performance with Limited Training Data