LLM-Explorer: Towards Efficient and Affordable LLM-based Exploration for Mobile Apps

📅 2025-05-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM-driven mobile application exploration methods invoke large language models (LLMs) at every step to generate actions, incurring prohibitive token consumption and computational overhead; moreover, most UI interactions do not require LLM capabilities and are susceptible to LLM biases. Method: We propose a knowledge-centric lightweight exploration paradigm that repositions the LLM as a dynamic knowledge maintainer—not an action generator—enabling LLM-free action selection. Our approach leverages state abstraction, compact knowledge representation, rule- and heuristic-based action generation, and online knowledge refinement. Contribution/Results: Evaluated on 20 representative mobile apps, our method achieves the highest coverage and fastest exploration speed among baselines. It reduces token cost by 148× compared to state-of-the-art approaches, while demonstrating superior efficiency, robustness, and scalability.
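The knowledge-centric loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the state abstraction, the visit-count heuristic, and the `refine` hook are all hypothetical stand-ins for the actual components (the paper's knowledge representation and refinement prompts are not specified here).

```python
from collections import defaultdict

def abstract_state(ui_elements):
    # Hypothetical state abstraction: collapse a concrete screen into a
    # compact key by keeping only element types and ids, ignoring volatile text.
    return frozenset((e["type"], e["id"]) for e in ui_elements)

class KnowledgeBase:
    """Compact exploration knowledge: per-state visit counts per action."""
    def __init__(self):
        self.visits = defaultdict(lambda: defaultdict(int))

    def select_action(self, state, actions):
        # LLM-free heuristic: pick the least-tried action in this state.
        return min(actions, key=lambda a: self.visits[state][a["id"]])

    def record(self, state, action):
        self.visits[state][action["id"]] += 1

    def refine(self, llm_hints):
        # Online knowledge refinement: an LLM (not modeled here) would
        # occasionally emit hints, e.g. marking actions as unproductive,
        # which we encode by heavily deprioritizing them.
        for state, action_id in llm_hints:
            self.visits[state][action_id] += 100

# Toy run: two buttons on one screen; the heuristic alternates between them.
screen = [{"type": "button", "id": "login"}, {"type": "button", "id": "help"}]
kb = KnowledgeBase()
s = abstract_state(screen)
first = kb.select_action(s, screen)   # untried -> "login" (first in order)
kb.record(s, first)
second = kb.select_action(s, screen)  # "login" now tried once -> "help"
```

The point of the design is visible even in this toy: action selection is a cheap table lookup on every step, and the (expensive) LLM only touches the knowledge store occasionally via `refine`.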

📝 Abstract
Large language models (LLMs) have opened new opportunities for automated mobile app exploration, an important and challenging problem that used to suffer from the difficulty of generating meaningful UI interactions. However, existing LLM-based exploration approaches rely heavily on LLMs to generate actions at almost every step, incurring substantial token fees and computational cost. We argue that such extensive use of LLMs is neither necessary nor effective, since many actions during exploration do not require, and may even be biased by, the abilities of LLMs. Further, based on the insight that precise and compact knowledge plays the central role in effective exploration, we introduce LLM-Explorer, a new exploration agent designed for efficiency and affordability. LLM-Explorer uses LLMs primarily to maintain knowledge rather than to generate actions, and that knowledge guides action generation in an LLM-free manner. In a comparison with five strong baselines on 20 typical apps, LLM-Explorer achieved the fastest exploration and highest coverage among all automated app explorers, at over 148× lower cost than the state-of-the-art LLM-based approach.
Problem

Research questions and friction points this paper is trying to address.

Reducing high token fees in LLM-based mobile app exploration
Minimizing unnecessary LLM usage for UI action generation
Achieving efficient app coverage with lower computational cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses LLMs to maintain exploration knowledge rather than to generate actions
Guides action generation heuristically, without per-step LLM calls
Achieves the highest coverage at far lower token cost