🤖 AI Summary
This work addresses the lack of effective evaluation benchmarks for intelligent agents on long-horizon, repetitive computer-operation tasks, a gap that hinders their deployment in real-world office environments. To bridge it, we introduce OS-Marathon, the first benchmark of this kind, comprising 242 long-horizon, repetitive tasks spanning two representative office domains. We further propose a few-shot teaching method that constructs a condensed demonstration from a small number of examples, enabling agents to abstract and generalize the underlying workflow logic efficiently. Experimental results show that current state-of-the-art agents perform poorly on such tasks, whereas our approach significantly improves both task completion rates and execution efficiency on large-scale unseen tasks.
📝 Abstract
Long-horizon, repetitive workflows are common in professional settings, such as processing expense reports from receipts or entering student grades from exam papers. These tasks are tedious for humans because their length grows in proportion to the amount of data being processed. However, they are well suited to Computer-Use Agents (CUAs), since their structured, recurring sub-workflows follow logic that can be systematically learned. Identifying the absence of an evaluation benchmark as a primary bottleneck, we establish OS-Marathon, comprising 242 long-horizon, repetitive tasks across 2 domains, to evaluate state-of-the-art (SOTA) agents. We then introduce a cost-effective method that constructs a condensed demonstration from only a few examples to teach agents the underlying workflow logic, enabling them to execute similar workflows effectively on larger, unseen data collections. Extensive experiments demonstrate both the inherent challenges of these tasks and the effectiveness of our proposed method. Project website: https://os-marathon.github.io/.