🤖 AI Summary
Addressing the challenge of automating end-to-end testing for multi-user real-time collaboration features (e.g., audio/video calls, live streaming) on mobile platforms, this paper proposes MAdroid, an LLM-driven multi-agent framework. MAdroid innovatively decomposes the testing process into three specialized agents—Coordinator (orchestrating test flow), Operator (executing UI actions via UiAutomator/ADB), and Observer (monitoring system behavior and evaluating action similarity)—thereby transcending the conventional single-tester paradigm to enable dynamic, cross-device, closed-loop validation. Technically, it integrates large language models, multi-agent systems, mobile UI automation, and behavioral similarity assessment. Evaluated on 41 real-world multi-user interaction tasks, MAdroid achieves an 82.9% task completion rate and 96.8% action similarity—significantly outperforming baseline approaches—and uncovers 11 industrial-grade defects, demonstrating both technical efficacy and practical deployment value.
📝 Abstract
The growing dependence on mobile phones and their apps has made multi-user interactive features, like chat calls, live streaming, and video conferencing, indispensable for bridging the gaps in social connectivity caused by physical and situational barriers. However, automating the testing of these interactive features is fraught with challenges, owing to their inherent need for timely, dynamic, and collaborative user interactions, which current automated testing methods inadequately address. Inspired by the concept of agents designed to autonomously and collaboratively tackle problems, we propose MAdroid, a novel multi-agent approach powered by Large Language Models (LLMs) to automate multi-user interactive tasks for app feature testing. Specifically, MAdroid employs two functional types of agents: user agents (Operator) and supervisor agents (Coordinator and Observer). Each agent takes a specific role: the Coordinator directs the interactive task; the Operator mimics user interactions on the device; and the Observer monitors and reviews the task automation process. Our evaluation on 41 multi-user interactive tasks demonstrates the effectiveness of our approach, completing 82.9% of the tasks with 96.8% action similarity and outperforming both the ablation variants and state-of-the-art baselines. Additionally, a preliminary investigation underscores MAdroid's practicality by helping identify 11 multi-user interactive bugs during regression app testing, confirming its potential value in real-world software development contexts.
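The Coordinator/Operator/Observer division of labor described above can be sketched in a few lines of code. This is purely an illustrative stand-in, not the paper's implementation: the real system plans actions with LLM prompts and drives physical devices via UiAutomator/ADB, whereas every class, method, and the tap-based device stub below are assumptions made for the sketch. The Coordinator dispatches scripted steps across per-device Operators, and the Observer scores the executed trace against the expected one (a simplified stand-in for the paper's action-similarity metric).

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    """Stub standing in for a real handset driven via UiAutomator/ADB."""
    name: str
    screen: str = "home"
    log: list = field(default_factory=list)

    def tap(self, target: str) -> None:
        # Record the action and pretend the UI navigated to the target.
        self.log.append(f"tap:{target}")
        self.screen = target

class Operator:
    """User agent: mimics one user's interactions on one device."""
    def __init__(self, device: Device):
        self.device = device

    def perform(self, action: str) -> str:
        self.device.tap(action)
        return self.device.screen  # observed result of the action

class Observer:
    """Supervisor agent: reviews the automation trace after each run."""
    @staticmethod
    def action_similarity(executed: list, expected: list) -> float:
        # Fraction of positions where the executed action matches the plan
        # (a crude proxy for the paper's similarity assessment).
        matches = sum(1 for e, x in zip(executed, expected) if e == x)
        return matches / max(len(expected), 1)

class Coordinator:
    """Supervisor agent: directs the interactive task across devices."""
    def __init__(self, operators: list, observer: Observer):
        self.operators = operators
        self.observer = observer

    def run(self, script: list) -> float:
        # script: (operator_index, action) pairs, e.g. caller dials,
        # callee answers -- a stand-in for LLM-planned steps.
        executed = [self.operators[i].perform(a) for i, a in script]
        return self.observer.action_similarity(executed,
                                               [a for _, a in script])

# Two-device "start a call" scenario: caller dials, callee answers.
caller, callee = Device("caller"), Device("callee")
coord = Coordinator([Operator(caller), Operator(callee)], Observer())
score = coord.run([(0, "dial_contact"), (1, "accept_call"), (0, "hang_up")])
print(score)  # → 1.0 (every scripted action was executed as planned)
```

The key structural point the sketch captures is the closed loop: the Coordinator never touches a device directly, and the Observer evaluates only after actions are executed, which is what lets the framework coordinate timely, collaborative interactions across multiple devices.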