🤖 AI Summary
This work presents the first vision-only navigation benchmark in real commercial 3D action role-playing games such as *Dark Souls*, addressing the navigability of complex linear levels using only screen-pixel input. The approach integrates an open-source visual affordance detector, real-time frame processing, and point-of-interest identification to drive a finite-state controller for goal-directed exploration. Experimental results show that this unimodal agent navigates most critical segments effectively, validating the potential of purely visual navigation while exposing limits in the robustness and generalization of current vision models. The study contributes both the first evaluation protocol for this domain and a reproducible baseline system, establishing a foundation for future research on vision-based embodied AI in challenging, realistic game environments.
📝 Abstract
Modern 3D game levels rely heavily on visual guidance, yet the navigability of level layouts remains difficult to quantify. Prior work either simulates play in simplified environments or analyzes static screenshots for visual affordances, but neither setting faithfully captures how players explore complex, real game levels. In this paper, we build on an existing open-source visual affordance detector and instantiate a screen-only exploration and navigation agent that operates purely on visual affordances. Our agent consumes live game frames, identifies salient interest points, and drives a simple finite-state controller over a minimal action space to explore *Dark Souls*-style linear levels and reach expected goal regions. Pilot experiments show that the agent traverses most required segments and exhibits meaningful visual navigation behavior, but they also reveal that limitations of the underlying visual model prevent fully comprehensive and reliable autonomous navigation. We argue that this system provides a concrete, shared baseline and evaluation protocol for visual navigation in complex games, and we call for more attention to this necessary task. Our results suggest that purely vision-based sense-making models, with discrete single-modality inputs and no explicit reasoning, can support navigation and environment understanding in idealized settings, but are unlikely to be a general solution on their own.
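To make the pipeline in the abstract concrete, the sketch below shows one plausible shape of the frame-to-action loop: detected interest points feed a tiny finite-state controller that emits actions from a minimal action space (turn/forward). This is a minimal illustrative sketch, not the paper's implementation; all class names, thresholds, and the action vocabulary (`InterestPoint`, `center_tol`, `"forward"`, etc.) are assumptions for exposition.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Optional

# Hypothetical output of a visual affordance / interest-point detector.
@dataclass
class InterestPoint:
    x: float      # horizontal position in the frame, normalized to [0, 1]
    score: float  # detector confidence

class State(Enum):
    SEARCH = auto()    # no salient target visible: rotate to scan the scene
    APPROACH = auto()  # target visible: steer toward it and move forward

class NavController:
    """Minimal finite-state controller over a tiny action space,
    in the spirit of the agent described above. Thresholds are
    illustrative assumptions, not values from the paper."""

    def __init__(self, center_tol: float = 0.1, min_score: float = 0.5):
        self.state = State.SEARCH
        self.center_tol = center_tol  # how close to frame center counts as "aligned"
        self.min_score = min_score    # ignore low-confidence detections

    def step(self, points: List[InterestPoint]) -> str:
        # Pick the highest-confidence interest point above threshold, if any.
        target: Optional[InterestPoint] = max(
            (p for p in points if p.score >= self.min_score),
            key=lambda p: p.score,
            default=None,
        )
        if target is None:
            self.state = State.SEARCH
            return "turn_right"  # keep scanning until something salient appears
        self.state = State.APPROACH
        offset = target.x - 0.5  # signed distance from frame center
        if offset < -self.center_tol:
            return "turn_left"
        if offset > self.center_tol:
            return "turn_right"
        return "forward"  # target roughly centered: walk toward it
```

Running one decision per incoming frame, a loop like this yields the scan-then-approach behavior the abstract describes, while all perception difficulty is delegated to the upstream detector.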