🤖 AI Summary
Existing App agents rely heavily on large closed-source LLMs or external APIs, incurring substantial computational overhead; fine-tuning small models via syntactic imitation (i.e., reproducing exact action strings) yields poor out-of-distribution (OOD) generalisation. Method: This paper proposes Action Semantics Learning (ASL), a novel framework that brings the notion of state-transition semantics from programming language theory into mobile App interaction modelling. ASL defines an action by its semantic effect (the observable user-interface state change it induces) rather than its syntactic form. A novel SEmantic Estimator (SEE) computes a semantic reward that scores whether a generated action induces the same state transition as the ground-truth action, and this reward is used to fine-tune compact models so that generated actions align with the semantics of the ground truth even when the syntactic forms differ. Contribution/Results: Evaluated on offline and online smartphone App operation benchmarks, ASL significantly improves task accuracy and OOD robustness, demonstrating the effectiveness and practicality of semantics-driven learning for lightweight App agents.
📝 Abstract
The advent of Large Language Models (LLMs) enables the rise of App agents that interpret user intent and operate smartphone Apps through actions such as clicking and scrolling. While prompt-based solutions built on closed LLM APIs show promising ability, they incur heavy compute costs and depend on external APIs. Fine-tuning smaller open-source LLMs addresses these limitations. However, current fine-tuning methods use a syntax learning paradigm that forces agents to reproduce the ground-truth action strings exactly, leading to out-of-distribution (OOD) vulnerability. To fill this gap, we propose Action Semantics Learning (ASL), a novel learning framework whose learning objective is to capture the semantics of the ground-truth actions. Specifically, inspired by programming language theory, we define the action semantics for App agents as the state transition induced by the action in the user interface. With this insight, ASL employs a novel SEmantic Estimator (SEE) to compute a semantic reward that trains App agents to generate actions aligned with the semantics of the ground-truth actions, even when the syntactic forms differ. To support the effectiveness of ASL, we theoretically demonstrate the superior robustness of ASL to the OOD problem compared with the existing syntax learning paradigm. Extensive experiments on offline and online smartphone App operation benchmarks show that ASL significantly improves the accuracy and generalisation of App agents over existing methods.
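The abstract's core distinction (rewarding the state transition an action induces rather than exact string match with the ground truth) can be illustrated with a toy sketch. The paper's actual SEE is not specified here; the `simulate` function, the set-of-elements state representation, and the binary reward below are all illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch: syntax-based vs. semantics-based reward for an App agent.
# All names (simulate, the action strings, the state encoding) are hypothetical.

def syntax_reward(pred_action: str, gold_action: str) -> float:
    """Syntax learning paradigm: reward only an exact action-string match."""
    return 1.0 if pred_action == gold_action else 0.0

def semantic_reward(pred_action: str, gold_action: str,
                    state: frozenset, simulate) -> float:
    """ASL-style idea: two actions are semantically equivalent if they
    induce the same UI state transition, even if their strings differ."""
    return 1.0 if simulate(state, pred_action) == simulate(state, gold_action) else 0.0

def simulate(state: frozenset, action: str) -> frozenset:
    """Toy UI model: the state is the set of visible screen elements.
    Clicking the settings button opens the settings screen, whether the
    click targets the element id or its displayed text."""
    if action in ('click(id=btn_settings)', 'click(text="Settings")'):
        return frozenset({'settings_screen'})
    return state  # unrecognised actions leave the UI unchanged

home = frozenset({'home_screen', 'btn_settings'})
pred = 'click(text="Settings")'
gold = 'click(id=btn_settings)'

print(syntax_reward(pred, gold))                    # 0.0: strings differ
print(semantic_reward(pred, gold, home, simulate))  # 1.0: same transition
```

Under string matching the prediction is penalised despite doing exactly the right thing; under the state-transition view it earns full reward, which is the OOD robustness argument the abstract makes.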