🤖 AI Summary
Existing MLLM-driven smartphone operation agents rely heavily on the Android Debug Bridge (ADB), which ties them to a single platform and makes deployment in real-world home environments impractical.
Method: We propose See-Control, the first platform-agnostic, multimodal embodied agent framework for Embodied Smartphone Operation (ESO). It employs a low-degree-of-freedom robotic arm to perform purely physical, system-agnostic interactions with smartphones of any brand or OS, eliminating reliance on ADB or system-level access. We formally define the ESO task, introduce the first ESO benchmark comprising 155 diverse tasks, and release an operation-segment dataset with fine-grained action annotations. Furthermore, we design a vision-action joint modeling approach enabling MLLMs to generate executable, physics-grounded manipulation commands.
Results: Experiments demonstrate robust cross-platform operability across heterogeneous devices. We fully open-source the benchmark, dataset, and codebase, establishing a scalable foundation for household robots to perform smartphone-related tasks.
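To make the "purely physical interaction" idea concrete, the sketch below shows one way an agent's screen-relative tap could be converted into a 3-D target for a robotic arm. This is a minimal illustration, not the paper's actual interface: the `ScreenFrame` calibration structure, the normalized-coordinate action format, and the `tap_target` helper are all hypothetical names introduced here for exposition.

```python
from dataclasses import dataclass

@dataclass
class ScreenFrame:
    """Hypothetical calibration of the phone screen in the arm's base frame.

    `origin` is the screen's top-left corner (metres); `u` and `v` are the
    vectors spanning the screen's width and height, respectively.
    """
    origin: tuple[float, float, float]
    u: tuple[float, float, float]
    v: tuple[float, float, float]

def tap_target(frame: ScreenFrame, nx: float, ny: float) -> tuple[float, float, float]:
    """Map a normalized screen tap (nx, ny) in [0, 1]^2 to a 3-D point.

    An MLLM agent that emits taps in screen-relative coordinates stays
    independent of the phone's brand, OS, and resolution; only this
    calibration ties the action to the physical workspace.
    """
    ox, oy, oz = frame.origin
    return (
        ox + nx * frame.u[0] + ny * frame.v[0],
        oy + nx * frame.u[1] + ny * frame.v[1],
        oz + nx * frame.u[2] + ny * frame.v[2],
    )

# Example: a 7 cm x 15 cm screen lying flat in the arm's XY plane.
frame = ScreenFrame(origin=(0.10, 0.20, 0.0),
                    u=(0.07, 0.0, 0.0),
                    v=(0.0, 0.15, 0.0))
center = tap_target(frame, 0.5, 0.5)  # tap the middle of the screen
```

Because the action space is defined on the screen image rather than through an OS API, the same agent policy transfers across devices; only the calibration (`frame` here) changes when the phone moves.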
📄 Abstract
Recent advances in Multimodal Large Language Models (MLLMs) have enabled their use as intelligent agents for smartphone operation. However, existing methods depend on the Android Debug Bridge (ADB) for data transmission and action execution, limiting their applicability to Android devices. In this work, we introduce the novel Embodied Smartphone Operation (ESO) task and present See-Control, a framework that enables smartphone operation via direct physical interaction with a low-DoF robotic arm, offering a platform-agnostic solution. See-Control comprises three key components: (1) an ESO benchmark with 155 tasks and corresponding evaluation metrics; (2) an MLLM-based embodied agent that generates robotic control commands without requiring ADB or system back-end access; and (3) a richly annotated dataset of operation episodes, offering valuable resources for future research. By bridging the gap between digital agents and the physical world, See-Control provides a concrete step toward enabling home robots to perform smartphone-dependent tasks in realistic environments.