🤖 AI Summary
Vision-language models (VLMs) face three key bottlenecks in embodied robot navigation: (1) absence of persistent scene memory, (2) weak spatial reasoning capabilities, and (3) inability to process long-horizon video streams in real time. To address these, we propose VL-KnG, a system that integrates chunked video understanding, dynamic knowledge graph construction, and efficient graph querying to build an updatable spatiotemporal knowledge graph with cross-frame object identity consistency, enabling continual scene understanding and interpretable spatial reasoning. Our contributions are threefold: (1) the first structured visual understanding framework explicitly designed for embodied navigation; (2) the open-source WalkieKnowledge benchmark for evaluating long-horizon visual grounding and spatial reasoning; and (3) real-robot deployment achieving a 77.27% task success rate and 76.92% question-answering accuracy, comparable to Gemini 2.5 Pro, while supporting low-latency, multi-task real-time inference.
📝 Abstract
Vision-language models (VLMs) have shown potential for robot navigation but encounter fundamental limitations: they lack persistent scene memory, offer limited spatial reasoning, and do not scale effectively with video duration for real-time applications. We present VL-KnG, a visual scene understanding system that tackles these challenges through spatiotemporal knowledge graph construction and computationally efficient query processing for navigation goal identification. Our approach processes video sequences in chunks using modern VLMs, builds persistent knowledge graphs that maintain object identity over time, and enables explainable spatial reasoning through queryable graph structures. We also introduce WalkieKnowledge, a new benchmark with about 200 manually annotated questions across 8 diverse trajectories spanning approximately 100 minutes of video, enabling fair comparison between structured approaches and general-purpose VLMs. Real-world deployment on a differential drive robot demonstrates practical applicability: our method achieves a 77.27% success rate and 76.92% answer accuracy, matching Gemini 2.5 Pro while providing explainable reasoning grounded in the knowledge graph and the computational efficiency needed for real-time deployment across tasks such as localization, navigation, and planning. Code and dataset will be released after acceptance.
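The core idea, per-chunk VLM output merged into a persistent graph that preserves object identity across frames and supports spatial queries, can be sketched as follows. This is a minimal illustration, not the paper's implementation; all class, field, and relation names here are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SceneGraph:
    """Minimal spatiotemporal knowledge graph (illustrative sketch only).
    Nodes are persistent object identities; edges are timestamped
    spatial relations extracted per video chunk."""
    nodes: dict = field(default_factory=dict)   # obj_id -> {"label", "first_seen", "last_seen"}
    edges: list = field(default_factory=list)   # (subj_id, relation, obj_id, chunk_idx)

    def update(self, chunk_idx, detections, relations):
        # Merge one chunk's (hypothetical) VLM output: reusing an existing
        # node when the same object id reappears keeps cross-frame identity.
        for obj_id, label in detections:
            node = self.nodes.setdefault(obj_id, {"label": label, "first_seen": chunk_idx})
            node["last_seen"] = chunk_idx
        for subj, rel, obj in relations:
            self.edges.append((subj, rel, obj, chunk_idx))

    def query(self, label):
        # Return all recorded relations involving objects with this label,
        # giving an explainable trace for a navigation-goal query.
        ids = {i for i, n in self.nodes.items() if n["label"] == label}
        return [e for e in self.edges if e[0] in ids or e[2] in ids]

g = SceneGraph()
g.update(0, [("chair_1", "chair"), ("table_1", "table")],
            [("chair_1", "next_to", "table_1")])
g.update(1, [("chair_1", "chair")], [])  # same chair re-observed in a later chunk
print(g.nodes["chair_1"]["last_seen"])   # -> 1: identity persisted across chunks
print(g.query("chair"))                  # -> [('chair_1', 'next_to', 'table_1', 0)]
```

Because the graph is incremental, memory grows with the number of distinct objects and relations rather than with video length, which is what makes long-horizon, real-time operation plausible.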