AI Summary
This study addresses the risks of misplaced trust in generative AI-driven conversational pedestrian navigation systems, which are vulnerable to dark patterns and insufficient explainability. The work presents the first systematic distinction between intentional manipulation and unintentional harm, proposing a novel paradigm for trustworthy navigation that integrates neuro-symbolic architectures, verifiable path planning, and seamful design. By coupling large language models with verifiable decision-making mechanisms, the framework enhances system transparency and user agency while preserving personalized guidance. This approach offers a practical and actionable pathway toward designing trustworthy conversational navigation systems grounded in both technical robustness and human-centered principles.
Abstract
As pedestrian navigation increasingly experiments with Generative AI, and in particular Large Language Models, routing risks being transformed from a verifiable geometric task into an opaque, persuasive dialogue. While conversational interfaces promise personalisation, they introduce risks of manipulation and misplaced trust. We categorise these risks using a 2x2 framework based on intent and origin, distinguishing between intentional manipulations (dark patterns) and unintended harms (explainability pitfalls). We propose seamful design strategies to mitigate these harms. We suggest that one robust way to operationalise trustworthy conversational navigation is through a neuro-symbolic architecture, in which verifiable pathfinding algorithms ground GenAI's persuasive capabilities, ensuring systems explain their limitations and incentives as clearly as they explain the route.
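To picture what "verifiable pathfinding grounding GenAI's persuasive capabilities" could mean in practice, here is a minimal sketch, not the paper's implementation: a symbolic layer (Dijkstra's algorithm over a toy street graph with hypothetical edge weights in metres) checks that any route an LLM proposes uses only real edges and stays within a stated detour bound of the verifiable optimum, so the system can explain a rejection as clearly as a route.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest-path cost and route on a weighted graph (adjacency dict of dicts)."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, weight in graph.get(node, {}).items():
            if neighbour not in seen:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return float("inf"), []

def verify_route(graph, proposed, max_detour=1.25):
    """Symbolic check on an LLM-proposed route: every edge must exist, and the
    total cost must stay within a detour bound of the verifiable optimum.
    Returns (accepted, human-readable explanation)."""
    cost = 0.0
    for a, b in zip(proposed, proposed[1:]):
        if b not in graph.get(a, {}):
            return False, f"edge {a}->{b} does not exist"
        cost += graph[a][b]
    optimal, _ = dijkstra(graph, proposed[0], proposed[-1])
    if cost > max_detour * optimal:
        return False, f"route is {cost / optimal:.0%} of optimal; exceeds detour bound"
    return True, f"route cost {cost:.0f} m is within bound of optimal {optimal:.0f} m"

# Toy street graph (hypothetical; weights are metres).
streets = {
    "A": {"B": 100, "C": 350},
    "B": {"C": 100},
    "C": {"D": 120},
    "D": {},
}

print(verify_route(streets, ["A", "B", "C", "D"]))  # accepted: 320 m, optimal
print(verify_route(streets, ["A", "C", "D"]))       # rejected: 470 m, over bound
```

The division of labour is the point: the LLM may argue for a scenic or sponsored detour, but the symbolic layer both bounds what it can recommend and supplies the explanation the abstract calls for, which is one way to read the seamful-design goal of exposing limitations rather than smoothing them over.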