See-Control: A Multimodal Agent Framework for Smartphone Interaction with a Robotic Arm

📅 2025-12-09
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing MLLM-driven smartphone-operation agents rely heavily on the Android Debug Bridge (ADB), which creates platform dependency and makes deployment impractical in real-world home environments. Method: We propose See-Control, the first platform-agnostic multimodal embodied agent framework for Embodied Smartphone Operation (ESO). It employs a low-degree-of-freedom robotic arm to perform purely physical, system-agnostic interactions with smartphones of any brand or OS, eliminating reliance on ADB or system-level access. We formally define the ESO task, introduce the first ESO benchmark comprising 155 diverse tasks, and release an operation-segment dataset with fine-grained action annotations. We also design a vision-action joint modeling approach that enables MLLMs to generate executable, physics-grounded manipulation commands. Results: Experiments demonstrate robust cross-platform operability across heterogeneous devices. We fully open-source the benchmark, dataset, and codebase, establishing a scalable foundation for household robots to perform smartphone-related tasks.

πŸ“ Abstract
Recent advances in Multimodal Large Language Models (MLLMs) have enabled their use as intelligent agents for smartphone operation. However, existing methods depend on the Android Debug Bridge (ADB) for data transmission and action execution, limiting their applicability to Android devices. In this work, we introduce the novel Embodied Smartphone Operation (ESO) task and present See-Control, a framework that enables smartphone operation via direct physical interaction with a low-DoF robotic arm, offering a platform-agnostic solution. See-Control comprises three key components: (1) an ESO benchmark with 155 tasks and corresponding evaluation metrics; (2) an MLLM-based embodied agent that generates robotic control commands without requiring ADB or system back-end access; and (3) a richly annotated dataset of operation episodes, offering valuable resources for future research. By bridging the gap between digital agents and the physical world, See-Control provides a concrete step toward enabling home robots to perform smartphone-dependent tasks in realistic environments.
Problem

Research questions and friction points this paper is trying to address.

Existing MLLM smartphone agents depend on ADB for data transmission and action execution, restricting them to Android devices
Reliance on system-level access makes deployment impractical in real-world home environments
No prior benchmark or annotated dataset exists for physically embodied, platform-agnostic smartphone operation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-DoF robotic arm performs direct physical interaction with the smartphone, independent of brand or OS
MLLM-based embodied agent generates robotic control commands without ADB or system back-end access
Platform-agnostic design ships with a 155-task ESO benchmark and a richly annotated episode dataset
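One concrete step any such agent needs is mapping the MLLM's proposed on-screen action (a tap at pixel coordinates seen by the camera) into arm workspace coordinates. The paper does not specify this at code level; below is a minimal illustrative sketch under a simple affine-calibration assumption, with all names, values, and the mocked MLLM action being hypothetical:

```python
from dataclasses import dataclass


@dataclass
class Calibration:
    """Affine map from screen pixels to arm workspace millimetres.

    Values are hypothetical; a real setup would calibrate them against
    the camera's view of the phone screen.
    """
    origin_mm: tuple      # workspace (x, y) of screen pixel (0, 0)
    mm_per_px_x: float
    mm_per_px_y: float


def pixel_to_workspace(cal: Calibration, px: int, py: int) -> tuple:
    """Convert a screen pixel coordinate to workspace (x, y) in mm."""
    ox, oy = cal.origin_mm
    return (ox + px * cal.mm_per_px_x, oy + py * cal.mm_per_px_y)


def plan_tap(mllm_action: dict, cal: Calibration) -> dict:
    """Turn an MLLM-proposed UI action into a physical tap command."""
    if mllm_action["type"] != "tap":
        raise ValueError("only tap actions are handled in this sketch")
    x_mm, y_mm = pixel_to_workspace(cal, *mllm_action["pixel"])
    return {"command": "tap", "x_mm": round(x_mm, 2), "y_mm": round(y_mm, 2)}


# Mocked MLLM output; in a real loop this would come from the
# vision-action model given a camera frame of the phone screen.
action = {"type": "tap", "pixel": (540, 1200)}
cal = Calibration(origin_mm=(100.0, 50.0), mm_per_px_x=0.06, mm_per_px_y=0.06)
print(plan_tap(action, cal))
```

The affine mapping stands in for whatever calibration a real low-DoF arm setup would use; the point is that the agent's output stays purely physical, with no ADB or system back-end in the loop.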