Less Redundancy: Boosting Practicality of Vision Language Model in Walking Assistants

📅 2025-08-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current vision-language models (VLMs) for guided navigation exhibit excessive output redundancy, insufficient risk perception, and non-adaptive alerting, which leads to information overload and delayed responses. To address these issues, we propose WalkVLM-LR: (1) a human-preference-driven reward framework that jointly optimizes conciseness, accuracy, fluency, and keyword density; (2) an environment-awareness discriminator that shares the visual encoder with the VLM, enabling risk-triggered, adaptive alert generation; and (3) low-redundancy inference via the GRPO optimization framework. Experiments demonstrate state-of-the-art performance across all evaluation metrics: output redundancy is reduced by 38.2%, average response latency decreases by 215 ms, and environmental comprehension efficiency and safety for visually impaired users are significantly improved.

📝 Abstract
Approximately 283 million people worldwide live with visual impairments, motivating growing research into leveraging Vision-Language Models (VLMs) to build effective walking assistance systems for blind and low-vision individuals. However, existing VLMs for walking assistance often produce outputs containing considerable redundancy and extraneous detail, which hampers users' ability to accurately assess their surroundings. Moreover, these models typically cannot proactively assess environmental risks and adaptively trigger reminders for the appropriate scenes, leading to excessive temporal redundancy. To mitigate both output and temporal redundancy, we propose WalkVLM-LR, a walking assistance model with less redundancy. To reduce output redundancy, we introduce four human-preference-based custom reward functions within a GRPO-based reasoning framework, optimizing outputs for conciseness, fluency, keyword density, and accuracy, thereby producing more informative and streamlined responses. To minimize temporal redundancy, we incorporate an environment-awareness discriminator that shares the visual encoder with the VLM, reducing redundant computation and improving discriminative efficiency, so that WalkVLM-LR can assess scene risk levels and suppress unnecessary reminders. Experimental results demonstrate that our method achieves state-of-the-art performance across all evaluation metrics compared with other models, particularly in output conciseness and reduced temporal redundancy.
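The abstract describes combining four preference-based reward terms inside a GRPO-style update. The paper itself does not publish this code here; the sketch below is an illustrative assumption of how such rewards could be scalarized and group-normalized (the weights, scoring heuristics, and function names are hypothetical, not the authors' implementation):

```python
# Hypothetical sketch of a four-term reward (conciseness, fluency,
# keyword density, accuracy) combined for a GRPO-style advantage.
# All weights and scoring heuristics below are illustrative assumptions.
import statistics
from dataclasses import dataclass

@dataclass
class RewardWeights:
    conciseness: float = 1.0
    fluency: float = 1.0
    keyword_density: float = 1.0
    accuracy: float = 1.0

def conciseness_score(text: str, target_len: int = 20) -> float:
    # Shorter outputs score higher; penalize words beyond a target budget.
    n = len(text.split())
    return max(0.0, 1.0 - max(0, n - target_len) / target_len)

def keyword_density_score(text: str, keywords: set) -> float:
    # Fraction of output words that are task-relevant keywords.
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(1 for w in words if w in keywords) / len(words)

def total_reward(text, keywords, fluency, accuracy, w=RewardWeights()):
    # Weighted sum of the four terms; fluency/accuracy assumed to come
    # from external scorers (e.g. an LM judge), passed in as floats.
    return (w.conciseness * conciseness_score(text)
            + w.keyword_density * keyword_density_score(text, keywords)
            + w.fluency * fluency
            + w.accuracy * accuracy)

def grpo_advantages(rewards):
    # GRPO normalizes rewards within a sampled group of completions:
    # advantage_i = (r_i - mean(r)) / (std(r) + eps)
    mu = statistics.fmean(rewards)
    sd = statistics.pstdev(rewards) or 1.0
    return [(r - mu) / sd for r in rewards]
```

Group normalization means only the relative ranking of sampled completions matters, so the absolute scale of each hand-crafted reward term is less critical than its ordering behavior.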
Problem

Research questions and friction points this paper is trying to address.

Reducing output redundancy in vision language models
Minimizing temporal redundancy for adaptive reminders
Enhancing walking assistance for visually impaired individuals
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-preference reward functions optimize concise outputs
Environment awareness discriminator reduces redundant computations
GRPO-based reasoning framework reduces output redundancy
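The discriminator idea above can be sketched as a tiny risk head reusing features from the shared visual encoder, so that the expensive VLM decoder runs only for risky scenes. This is a minimal illustrative sketch under assumptions, not the authors' architecture:

```python
# Illustrative sketch (not the authors' code): a lightweight risk head
# over visual features already computed by the shared encoder. The
# linear head and threshold here are assumptions for illustration.
import math

def risk_head(features, weights, bias):
    # Tiny linear classifier over shared visual features -> risk probability.
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid

def assist_step(features, weights, bias, generate, threshold=0.5):
    """Trigger the (expensive) VLM decoder only when the scene looks
    risky; otherwise stay silent, avoiding temporal redundancy. The
    encoder features are computed once and shared with the risk head."""
    p = risk_head(features, weights, bias)
    return generate(features) if p >= threshold else None
```

Example: `assist_step(feats, w, b, vlm_decode)` returns an alert string for a high-risk scene and `None` otherwise; raising `threshold` trades fewer interruptions against the chance of missing a hazard.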
Chongyang Li
Pattern Recognition Center, WeChat AI, Tencent Inc, China
Yuan Zhiqiang
Pattern Recognition Center, WeChat AI, Tencent Inc, China
Jiapei Zhang
Pattern Recognition Center, WeChat AI, Tencent Inc, China
Ying Deng
Pattern Recognition Center, WeChat AI, Tencent Inc, China
Hanbo Bi
Pattern Recognition Center, WeChat AI, Tencent Inc, China
Zexi Jia
Pattern Recognition Center, WeChat AI, Tencent Inc, China
Xiaoyue Duan
Beihang University
Peixiang Luo
Pattern Recognition Center, WeChat AI, Tencent Inc, China
Jinchao Zhang
WeChat AI - Pattern Recognition Center