Sign Language: Towards Sign Understanding for Robot Autonomy

📅 2025-06-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work introduces and formally defines the task of navigational sign understanding, which aims to enable autonomous robots to extract spatial-semantic information from symbolic signage (e.g., directional arrows and location labels) to support scene understanding and navigation. Methodologically, we construct the first benchmark test set of sign imagery—comprising over 160 images spanning diverse real-world scenarios and exhibiting high visual variability—and design dedicated evaluation metrics tailored to symbolic spatial reasoning. We further propose a baseline framework that leverages Vision-Language Models (VLMs) to parse navigational signs under this variability. Experiments show that VLMs offer promising performance on complex, realistic signage, potentially motivating downstream applications in robotics. To foster reproducibility and community advancement, both the code and dataset are publicly released.

📝 Abstract
Signage is a ubiquitous element of human environments, playing a critical role in both scene understanding and navigation. For autonomous systems to fully interpret human environments, effectively parsing and understanding signs is essential. We introduce the task of navigational sign understanding, aimed at extracting navigational cues from signs that convey symbolic spatial information about the scene. Specifically, we focus on signs capturing directional cues that point toward distant locations and locational cues that identify specific places. To benchmark performance on this task, we curate a comprehensive test set, propose appropriate evaluation metrics, and establish a baseline approach. Our test set consists of over 160 images, capturing signs with varying complexity and design across a wide range of public spaces, such as hospitals, shopping malls, and transportation hubs. Our baseline approach harnesses Vision-Language Models (VLMs) to parse navigational signs under these high degrees of variability. Experiments show that VLMs offer promising performance on this task, potentially motivating downstream applications in robotics. The code and dataset are available on GitHub.
Problem

Research questions and friction points this paper is trying to address.

Understanding navigational signs for robot autonomy
Extracting directional and locational cues from signs
Benchmarking sign parsing using Vision-Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Vision-Language Models for sign parsing
Focuses on directional and locational sign cues
Tests on 160+ diverse public space images
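To make the task concrete, below is a minimal sketch of the kind of structured output a VLM-based sign parser might produce and how it could be consumed. The JSON schema and field names (`directional`, `locational`, `label`, `arrow`) are hypothetical illustrations, not the paper's actual format.

```python
import json
from dataclasses import dataclass

# Hypothetical schema: the field names below are assumptions for
# illustration, not taken from the paper's released code.

@dataclass
class DirectionalCue:
    label: str   # destination text on the sign, e.g. "Radiology"
    arrow: str   # arrow direction, e.g. "left", "right", "up"

@dataclass
class LocationalCue:
    label: str   # place identified by the sign, e.g. "Ward 3B"

def parse_vlm_output(raw: str) -> tuple[list[DirectionalCue], list[LocationalCue]]:
    """Parse a (hypothetical) JSON response from a VLM prompted to read a sign."""
    data = json.loads(raw)
    directional = [DirectionalCue(d["label"], d["arrow"])
                   for d in data.get("directional", [])]
    locational = [LocationalCue(l["label"])
                  for l in data.get("locational", [])]
    return directional, locational

# Example response a VLM might return for a hospital sign
response = ('{"directional": [{"label": "Radiology", "arrow": "left"}],'
            ' "locational": [{"label": "Ward 3B"}]}')
dirs, locs = parse_vlm_output(response)
```

Separating directional cues (label plus arrow) from locational cues (label only) mirrors the two sign types the paper targets.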
Ayush Agrawal
Smart Systems Institute, National University of Singapore
Joel Loo
Smart Systems Institute, National University of Singapore
Nicky Zimmerman
Research Fellow, National University of Singapore
robotics, localization, SLAM, navigation
David Hsu
Professor of Computer Science, National University of Singapore
Robotics, AI, Computational Biology