🤖 AI Summary
Existing graphical user interfaces (GUIs) are designed primarily for human users, prioritizing aesthetics and usability, which results in suboptimal efficiency for computer-use agents (CUAs). Method: This paper introduces an agent-native GUI design paradigm in which CUAs serve as evaluators that guide code language models (Codex-style Coder models) through automated GUI generation and refinement. The authors propose AUI-Gym, a benchmark comprising 1,560 cross-domain real-world tasks, and a visual CUA Dashboard that compresses multi-step interactions into interpretable feedback driven by task solvability and navigation success. The framework integrates LLM-based task generation, programmatic validators for execution correctness, Coder-driven interface synthesis and modification, and closed-loop CUA evaluation with iterative refinement. Contribution/Results: Experiments demonstrate significant improvements in GUI functional utility and CUA navigation success rates, advancing CUAs from passive interface consumers to active, collaborative participants in digital environments.
📝 Abstract
Computer-Use Agents (CUAs) are becoming increasingly capable of autonomously operating digital environments through Graphical User Interfaces (GUIs). Yet most GUIs remain designed primarily for humans, prioritizing aesthetics and usability, which forces agents to adopt human-oriented behaviors that are unnecessary for efficient task execution. At the same time, rapid advances in coding-oriented language models (Coders) have transformed automatic GUI design. This raises a fundamental question: can CUAs act as judges that assist Coders in automatic GUI design? To investigate, we introduce AUI-Gym, a benchmark for automatic GUI development spanning 52 applications across diverse domains. Using language models, we synthesize 1,560 tasks that simulate real-world scenarios. To ensure task reliability, we further develop a verifier that programmatically checks whether each task is executable within its environment. Building on this, we propose a Coder-CUA in Collaboration framework: the Coder acts as Designer, generating and revising websites, while the CUA serves as Judge, evaluating functionality and refining designs. Success is measured not by visual appearance, but by task solvability and CUA navigation success rate. To turn CUA feedback into usable guidance, we design a CUA Dashboard that compresses multi-step navigation histories into concise visual summaries, offering interpretable guidance for iterative redesign. By positioning agents as both designers and judges, our framework shifts interface design toward agent-native efficiency and reliability. Our work takes a step toward moving agents from passive use to active participation in digital environments. Our code and dataset are available at https://github.com/showlab/AUI.
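The closed loop the abstract describes (Coder generates, CUA judges, dashboard feedback drives revision) can be sketched abstractly. This is a minimal toy illustration, not the paper's actual implementation: all function names (`generate_site`, `cua_attempt`, `summarize_dashboard`, `refine_loop`) and the success model are hypothetical stand-ins.

```python
# Toy sketch of the Coder-CUA collaboration loop from the abstract.
# All names and the "success improves with revision" model are illustrative
# assumptions, not the paper's API.

def generate_site(spec, feedback=None):
    """Coder-as-Designer: produce (or revise) an interface for a spec."""
    version = 0 if feedback is None else feedback["round"] + 1
    return {"spec": spec, "version": version}

def cua_attempt(site, tasks):
    """CUA-as-Judge: attempt each task; here, each revision solves one more
    task (a stand-in for real navigation rollouts)."""
    solved = min(len(tasks), site["version"] + 1)
    return [i < solved for i in range(len(tasks))]

def summarize_dashboard(results, round_idx):
    """CUA Dashboard: compress the attempt history into concise feedback."""
    return {"round": round_idx, "success_rate": sum(results) / len(results)}

def refine_loop(spec, tasks, max_rounds=5, target=1.0):
    """Iterate design -> evaluation -> feedback until tasks are solvable."""
    feedback = None
    for r in range(max_rounds):
        site = generate_site(spec, feedback)
        results = cua_attempt(site, tasks)
        feedback = summarize_dashboard(results, r)
        if feedback["success_rate"] >= target:
            break
    return site, feedback

site, fb = refine_loop("todo-app", tasks=["add item", "delete item", "filter"])
print(fb["success_rate"])  # reaches 1.0 after two revisions in this toy model
```

The key design point mirrored here is that the stopping criterion is the CUA's task success rate, not any visual property of the generated interface.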