🤖 AI Summary
Current vision-language models (VLMs) for guided navigation exhibit excessive output redundancy, insufficient risk perception, and non-adaptive alerting—leading to information overload and delayed responses. To address these issues, we propose WalkVLM-LR: (1) a human-preference-driven reward framework jointly optimizing conciseness, accuracy, fluency, and keyword density; (2) an environment-aware discriminator with a shared visual encoder that enables risk-triggered, adaptive alert generation; and (3) low-redundancy inference via the GRPO optimization framework. Experiments demonstrate state-of-the-art performance across all evaluation metrics: output redundancy is reduced by 38.2%, average response latency decreases by 215 ms, and environmental comprehension efficiency and safety for visually impaired users are significantly enhanced.
📝 Abstract
Approximately 283 million people worldwide live with visual impairments, motivating growing research into leveraging Vision-Language Models (VLMs) to build effective walking-assistance systems for blind and low-vision individuals. However, the outputs of existing VLMs on the walking-assistance task often contain considerable redundancy and extraneous detail, impairing users' ability to accurately assess their surroundings. Moreover, these models typically cannot proactively assess environmental risk and trigger reminders only in scenes that warrant them, leading to excessive temporal redundancy. To mitigate both output and temporal redundancy, we propose WalkVLM-LR, a walking-assistance model with less redundancy. To reduce output redundancy, we introduce four human-preference-based custom reward functions within a GRPO-based reasoning framework that optimize outputs for conciseness, fluency, keyword density, and accuracy, yielding more informative and streamlined responses. To minimize temporal redundancy, we incorporate an environment-awareness discriminator that shares the visual encoder with the VLM, reducing redundant computation and improving discriminative efficiency; this enables WalkVLM-LR to assess scene risk levels and suppress unnecessary reminders. Experimental results demonstrate that our method achieves state-of-the-art performance across all evaluation metrics, particularly in output conciseness and reduced temporal redundancy.
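The abstract describes scoring candidate outputs with four human-preference reward terms (conciseness, fluency, keyword density, accuracy) that are combined under GRPO-style optimization. A minimal sketch of how such a composite reward might be assembled is below; the specific scoring functions, weights, and names are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of a four-term preference reward for ranking
# candidate responses in GRPO-style training. All weights and scoring
# heuristics here are illustrative assumptions, not the paper's method.

def conciseness_reward(text: str, target_len: int = 20) -> float:
    """Penalize outputs that exceed a target word budget."""
    n = len(text.split())
    return max(0.0, 1.0 - max(0, n - target_len) / target_len)

def keyword_density_reward(text: str, keywords: set) -> float:
    """Fraction of reference safety keywords present in the output."""
    words = set(text.lower().split())
    return len(words & keywords) / max(1, len(keywords))

def combined_reward(text: str, keywords: set,
                    accuracy: float, fluency: float,
                    weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Weighted sum of the four reward components; accuracy and
    fluency are assumed to come from external scorers."""
    terms = (conciseness_reward(text),
             keyword_density_reward(text, keywords),
             accuracy, fluency)
    return sum(w * t for w, t in zip(weights, terms))
```

In a GRPO setup, a score like this would be computed for each response in a sampled group, and the group-relative advantages would then drive the policy update.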