VL-KnG: Visual Scene Understanding for Navigation Goal Identification using Spatiotemporal Knowledge Graphs

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language models (VLMs) face three key bottlenecks in embodied robot navigation: (1) absence of persistent scene memory, (2) weak spatial reasoning capabilities, and (3) inability to process long-horizon video streams in real time. To address these, we propose VL-KnG—a system that integrates chunked video understanding, dynamic knowledge graph construction, and efficient graph querying to build an updatable spatiotemporal knowledge graph with cross-frame object identity consistency, enabling continual scene understanding and interpretable spatial reasoning. Our contributions are threefold: (1) the first structured visual understanding framework explicitly designed for embodied navigation; (2) the open-source WalkieKnowledge benchmark for evaluating long-horizon visual grounding and spatial reasoning; and (3) real-robot deployment achieving 77.27% task success rate and 76.92% question-answering accuracy—comparable to Gemini 2.5 Pro—while supporting low-latency, multi-task real-time inference.

📝 Abstract
Vision-language models (VLMs) have shown potential for robot navigation but encounter fundamental limitations: they lack persistent scene memory, offer limited spatial reasoning, and do not scale effectively with video duration for real-time application. We present VL-KnG, a Visual Scene Understanding system that tackles these challenges using spatiotemporal knowledge graph construction and computationally efficient query processing for navigation goal identification. Our approach processes video sequences in chunks using modern VLMs, creates persistent knowledge graphs that maintain object identity over time, and enables explainable spatial reasoning through queryable graph structures. We also introduce WalkieKnowledge, a new benchmark with about 200 manually annotated questions across 8 diverse trajectories spanning approximately 100 minutes of video data, enabling fair comparison between structured approaches and general-purpose VLMs. Real-world deployment on a differential drive robot demonstrates practical applicability: our method achieves a 77.27% success rate and 76.92% answer accuracy, matching Gemini 2.5 Pro performance while providing explainable reasoning grounded in the knowledge graph and the computational efficiency required for real-time deployment across tasks such as localization, navigation, and planning. Code and dataset will be released after acceptance.
Problem

Research questions and friction points this paper is trying to address.

Overcoming limitations in persistent scene memory for robot navigation
Enabling explainable spatial reasoning through queryable graph structures
Achieving computational efficiency for real-time navigation goal identification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constructs persistent spatiotemporal knowledge graphs from video
Enables explainable spatial reasoning through queryable graph structures
Achieves real-time efficiency with chunk-based video processing
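The core idea described above — a persistent graph whose object nodes keep their identity across video chunks and can be queried for spatial reasoning — can be sketched in a few lines. This is an illustrative toy, not the paper's actual implementation: the class names, the label-plus-proximity matching rule, and the `match_radius` parameter are all assumptions made for the sketch.

```python
# Minimal sketch of a spatiotemporal scene knowledge graph with
# cross-chunk object identity. Names and matching logic are
# illustrative assumptions, not VL-KnG's actual API.
from dataclasses import dataclass


@dataclass
class ObjectNode:
    obj_id: int
    label: str
    position: tuple   # last known (x, y) in the map frame
    first_seen: float  # timestamp of first detection (s)
    last_seen: float   # timestamp of latest detection (s)


class SceneGraph:
    """Persistent graph: object nodes plus labeled spatial relations."""

    def __init__(self, match_radius=1.0):
        self.nodes = {}                   # obj_id -> ObjectNode
        self.relations = []               # (subj_id, predicate, obj_id)
        self.match_radius = match_radius  # meters, for identity matching
        self._next_id = 0

    def _match(self, label, position):
        # Cross-frame identity: reuse a node with the same label
        # that lies within match_radius of the new detection.
        for node in self.nodes.values():
            if node.label == label:
                dx = node.position[0] - position[0]
                dy = node.position[1] - position[1]
                if (dx * dx + dy * dy) ** 0.5 <= self.match_radius:
                    return node
        return None

    def update(self, detections, timestamp):
        """Merge one chunk's detections: [(label, (x, y)), ...]."""
        for label, pos in detections:
            node = self._match(label, pos)
            if node is None:
                node = ObjectNode(self._next_id, label, pos,
                                  timestamp, timestamp)
                self.nodes[node.obj_id] = node
                self._next_id += 1
            else:  # same object seen again: update, don't duplicate
                node.position = pos
                node.last_seen = timestamp

    def query(self, label):
        """Return nodes matching a label, most recently seen first."""
        hits = [n for n in self.nodes.values() if n.label == label]
        return sorted(hits, key=lambda n: n.last_seen, reverse=True)


# Two chunks of a walk-through: the chair at t=5 s is re-identified
# as the chair from t=0 s, so the graph holds 3 nodes, not 4.
g = SceneGraph()
g.update([("door", (0.0, 2.0)), ("chair", (3.0, 1.0))], timestamp=0.0)
g.update([("chair", (3.2, 1.1)), ("plant", (5.0, 0.0))], timestamp=5.0)
print(len(g.nodes))                    # 3
print(g.query("chair")[0].first_seen)  # 0.0
```

A real system would match identities with visual embeddings rather than label-and-distance heuristics, and the graph's relations would feed a query engine for navigation goal identification; the sketch only shows why a persistent, updatable graph gives scene memory that raw per-frame VLM calls lack.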
Mohamad Al Mdfaa
Applied AI Institute
Svetlana Lukina
Applied AI Institute
Timur Akhtyamov
Applied AI Institute
Arthur Nigmatzyanov
Applied AI Institute
Dmitrii Nalberskii
Applied AI Institute
Sergey Zagoruyko
polynome.ai
Gonzalo Ferrer
Skolkovo Institute of Science and Technology