🤖 AI Summary
This work addresses the challenge that existing vision-language models often produce semantically inconsistent descriptions of the same object across multiple viewpoints, hindering the formation of persistent and coherent semantic representations. To resolve this, the authors propose a memory-augmented vision-language agent that jointly optimizes data association, description generation, and exploration policy within a unified autoregressive framework. Central to this approach is an object-level episodic memory mechanism, presented as the first of its kind, which enforces cross-temporal semantic consistency. The method integrates self-supervised training, disagreement-based exploration, and pseudo-label consistency constraints, yielding significant performance gains on a human-annotated test set: up to an 11.86% improvement in standard captioning metrics, a 7.39% increase in description self-similarity, and scalable inference enabled by compact scene representations.
📝 Abstract
Vision-Language Models (VLMs) often yield inconsistent descriptions of the same object across viewpoints, hindering the ability of embodied agents to construct consistent semantic representations over time. Previous methods resolved inconsistencies using offline multi-view aggregation or multi-stage pipelines that decouple exploration, data association, and caption learning, with limited capacity to reason over previously observed objects. In this paper, we introduce a unified, memory-augmented Vision-Language agent that simultaneously handles data association, object captioning, and exploration policy within a single autoregressive framework. The model processes the current RGB observation, a top-down explored map, and an object-level episodic memory serialized into object-level tokens, ensuring persistent object identity and semantic consistency across extended sequences. To train the model in a self-supervised manner, we collect a dataset in photorealistic 3D environments using a disagreement-based policy and a pseudo-captioning model that enforces consistency across multi-view caption histories. Extensive evaluation on a manually annotated object-level test set demonstrates improvements of up to +11.86% in standard captioning scores and +7.39% in caption self-similarity over baseline models, while enabling scalable performance through a compact scene representation. Code, model weights, and data are available at https://github.com/hsp-iit/epos-vlm
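As a rough illustration of the memory mechanism described above, the sketch below shows one way an object-level episodic memory could be maintained (data association by object identity) and serialized into object-level tokens for an autoregressive model. All class names, fields, and the token format here are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectEntry:
    # Hypothetical per-object record; the paper's actual memory schema is not shown here.
    obj_id: int
    caption: str
    views: int = 1

@dataclass
class EpisodicMemory:
    entries: dict = field(default_factory=dict)

    def associate(self, obj_id: int, caption: str) -> None:
        # Data association: update an existing object's entry or register a new one,
        # so the same physical object keeps a single persistent identity.
        if obj_id in self.entries:
            entry = self.entries[obj_id]
            entry.caption = caption  # keep the most recent (consistent) caption
            entry.views += 1
        else:
            self.entries[obj_id] = ObjectEntry(obj_id, caption)

    def serialize(self) -> str:
        # Serialize the memory into a flat sequence of object-level tokens that
        # could be prepended to the model's context (format is an assumption).
        return " ".join(
            f"<obj_{e.obj_id}> {e.caption} [views={e.views}]"
            for e in self.entries.values()
        )

mem = EpisodicMemory()
mem.associate(0, "a red mug on a table")
mem.associate(1, "a wooden chair")
mem.associate(0, "a red ceramic mug")  # same object re-observed from a new viewpoint
print(mem.serialize())
# → <obj_0> a red ceramic mug [views=2] <obj_1> a wooden chair [views=1]
```

The key property this sketch captures is that re-observations update an existing entry rather than creating a new one, so the serialized context presents each object exactly once with its latest description.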