🤖 AI Summary
Current LLM-based dialogue systems rely exclusively on explicit textual input, limiting their ability to perceive users’ actual behaviors and environmental states. To address this, we propose a mobile-aware, context-aware chatbot framework that systematically maps passive, multimodal smartphone sensor data—such as accelerometer readings, GPS coordinates, Wi-Fi/BLE signals, and ambient light—onto 16 structured semantic contexts. We further design a context-driven natural language prompting mechanism to tightly couple physical-world states with LLM reasoning. Our methodology integrates context abstraction modeling, structured prompt engineering, and LLM fine-tuning with inference optimization. Evaluated on digital health dialogue tasks, our approach improves context-aware response accuracy by 37.2% and user satisfaction by 41.5%, demonstrating the efficacy and practicality of leveraging passive behavioral sensing to enhance intelligent conversational systems.
📝 Abstract
With the rapid advancement of large language models (LLMs), intelligent conversational assistants have demonstrated remarkable capabilities across various domains. However, they still rely mainly on explicit textual input and remain unaware of users' real-world behaviors. This paper proposes a context-sensitive conversational assistant framework grounded in mobile sensing data. By collecting user behavior and environmental data through smartphones, we abstract these signals into 16 contextual scenarios and translate them into natural language prompts, thereby improving the model's understanding of the user's state. We design a structured prompting system to guide the LLM in generating more personalized and contextually relevant dialogue. This approach integrates mobile sensing with large language models, demonstrating the potential of passive behavioral data in intelligent conversation and offering a viable path toward digital health and personalized interaction.
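The core pipeline described above — abstracting raw sensor signals into a semantic context label and injecting it into a natural-language prompt — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the context labels, thresholds, and field names here are hypothetical assumptions, and the paper's 16 scenarios are not enumerated in the abstract.

```python
# Hypothetical sketch of the sensing-to-prompt pipeline.
# Context labels and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class SensorSnapshot:
    accel_magnitude: float   # m/s^2, from the accelerometer
    speed_mps: float         # movement speed derived from GPS
    ambient_lux: float       # ambient light sensor reading
    known_wifi: bool         # connected to a familiar Wi-Fi network

def infer_context(s: SensorSnapshot) -> str:
    """Map raw signals to a coarse semantic context label (assumed rules)."""
    if s.speed_mps > 5.0:
        return "commuting"
    if s.accel_magnitude > 12.0:
        return "exercising"
    if s.known_wifi and s.ambient_lux < 10.0:
        return "resting at home"
    return "stationary indoors"

def build_prompt(user_msg: str, context: str) -> str:
    """Inject the inferred context into a structured natural-language prompt."""
    return (
        f"[User context: the user is currently {context}.]\n"
        f"User: {user_msg}\n"
        "Assistant:"
    )

snap = SensorSnapshot(accel_magnitude=9.8, speed_mps=0.2,
                      ambient_lux=3.0, known_wifi=True)
prompt = build_prompt("Any tips for winding down tonight?", infer_context(snap))
print(prompt)
```

In a real deployment the rule-based `infer_context` step would be replaced by the paper's context abstraction model, and the prompt template would follow its structured prompting system.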