AgentSense: LLMs Empower Generalizable and Explainable Web-Based Participatory Urban Sensing

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing participatory urban sensing systems suffer from poor generalizability and opaque decision-making. This paper proposes AgentSense, a training-free multi-agent framework that integrates large language models (LLMs) into mobile-individual-based web sensing systems. AgentSense enables dynamic adaptation and interpretable decision-making through LLM-driven iterative task allocation and natural-language-based reasoning. Its core contributions are: (1) a co-evolutionary architecture synergizing classical planners with LLMs to support perturbation-responsive scheduling and self-adaptation; and (2) elimination of end-to-end training in favor of multi-agent collaboration for joint task optimization and real-time explanation generation. Evaluated on two large-scale urban datasets under seven types of dynamic disturbances, AgentSense significantly outperforms conventional methods and single-agent baselines—demonstrating superior robustness, environmental adaptability, and explanation plausibility.

📝 Abstract
Web-based participatory urban sensing has emerged as a vital approach for modern urban management by leveraging mobile individuals as distributed sensors. However, existing urban sensing systems struggle with limited generalization across diverse urban scenarios and poor interpretability in decision-making. In this work, we introduce AgentSense, a hybrid, training-free framework that integrates large language models (LLMs) into participatory urban sensing through a multi-agent evolution system. AgentSense initially employs a classical planner to generate baseline solutions, then iteratively refines them to adapt sensing task assignments to dynamic urban conditions and heterogeneous worker preferences, while producing natural language explanations that enhance transparency and trust. Extensive experiments across two large-scale mobility datasets and seven types of dynamic disturbances demonstrate that AgentSense offers distinct advantages in adaptivity and explainability over traditional methods. Furthermore, compared to single-agent LLM baselines, our approach achieves higher performance and robustness while delivering more reasonable and transparent explanations. These results position AgentSense as a significant advancement towards deploying adaptive and explainable urban sensing systems on the web.
Problem

Research questions and friction points this paper is trying to address.

Enhancing generalization across diverse urban sensing scenarios
Improving interpretability of participatory sensing decision-making
Adapting sensing tasks to dynamic urban conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates LLMs into participatory urban sensing
Refines solutions for dynamic urban conditions
Produces natural language explanations for transparency
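The planner-then-refine loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: all names (`classical_planner`, `refine`, `dropout_critic`) are assumptions, and the "critic" stands in for the LLM agents, here simulated by a rule that reacts to a worker-dropout perturbation and emits a natural-language explanation for each decision.

```python
from dataclasses import dataclass

@dataclass
class Assignment:
    worker: str
    task: str

def classical_planner(tasks, workers):
    # Round-robin baseline: a simple stand-in for the classical planner
    # that produces the initial task-to-worker assignment.
    return [Assignment(workers[i % len(workers)], t) for i, t in enumerate(tasks)]

def refine(assignments, critic, max_iters=5):
    # The critic plays the LLM-agent role: it returns (new_plan, explanation),
    # or (None, explanation) when the current plan needs no further change.
    explanations = []
    for _ in range(max_iters):
        proposal, why = critic(assignments)
        explanations.append(why)
        if proposal is None:
            break
        assignments = proposal
    return assignments, explanations

def dropout_critic(assignments, unavailable=frozenset({"w2"}), pool=("w1", "w3")):
    # Simulated perturbation: worker "w2" becomes unavailable, so each of
    # their tasks is moved to the least-loaded remaining worker.
    stale = [a for a in assignments if a.worker in unavailable]
    if not stale:
        return None, "All tasks are held by available workers; plan is stable."
    a = stale[0]
    new_worker = min(pool, key=lambda w: sum(x.worker == w for x in assignments))
    fixed = [Assignment(new_worker, x.task) if x is a else x for x in assignments]
    return fixed, f"Reassigned task {a.task!r} from unavailable worker {a.worker!r} to {new_worker!r}."

plan = classical_planner(["t1", "t2", "t3", "t4"], ["w1", "w2", "w3"])
plan, log = refine(plan, dropout_critic)
```

In the real system the critic step would be an LLM call that inspects the plan and the current disturbance, but the control flow (baseline plan, iterative repair, per-step explanation log) follows the co-evolutionary loop the summary describes.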