🤖 AI Summary
Current vision-language models exhibit significant limitations in understanding the real-world spatiotemporal information embedded in images, particularly in judging physical plausibility. To address this gap, this work proposes TimeSpot, the first benchmark for visual spatiotemporal understanding grounded in real-world scenarios, comprising 1,455 ground-level images from 80 countries. Models are tasked with jointly predicting structured spatiotemporal attributes (season, month, time of day, and geographic location) from visual cues alone. A comprehensive evaluation of leading open- and closed-source models reveals consistently poor performance, especially in temporal reasoning. While supervised fine-tuning yields modest improvements, overall capability remains far from satisfactory, highlighting a critical deficiency in how current models capture spatiotemporal physical consistency.
📝 Abstract
Geo-temporal understanding, the ability to infer location, time, and contextual properties from visual input alone, underpins applications such as disaster management, traffic planning, embodied navigation, world modeling, and geography education. Although recent vision-language models (VLMs) have advanced image geo-localization using cues like landmarks and road signs, their ability to reason about temporal signals and physically grounded spatial cues remains limited. To address this gap, we introduce TimeSpot, a benchmark for evaluating real-world geo-temporal reasoning in VLMs. TimeSpot comprises 1,455 ground-level images from 80 countries and requires structured prediction of temporal attributes (season, month, time of day, daylight phase) and geographic attributes (continent, country, climate zone, environment type, latitude-longitude) directly from visual evidence. It also includes spatiotemporal reasoning tasks that test physical plausibility under real-world uncertainty. Evaluations of state-of-the-art open- and closed-source VLMs show consistently low performance, particularly for temporal inference. While supervised fine-tuning yields improvements, results remain insufficient, highlighting the need for new methods to achieve robust, physically grounded geo-temporal understanding. TimeSpot is available at: https://TimeSpot-GT.github.io.
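The structured prediction target described in the abstract can be sketched as a record of temporal and geographic attributes, paired with a toy physical-consistency check of the kind the benchmark's reasoning tasks probe. This is a minimal illustration only: the field names, value sets, and the season-month rule below are assumptions for exposition, not TimeSpot's actual schema or evaluation logic.

```python
from dataclasses import dataclass

# Hypothetical schema for a TimeSpot-style prediction; field names and
# example values are illustrative assumptions, not the benchmark's format.
@dataclass
class GeoTemporalPrediction:
    season: str          # e.g. "winter"
    month: str           # e.g. "January"
    time_of_day: str     # e.g. "09:00", or a coarse bucket
    daylight_phase: str  # e.g. "daytime", "dusk", "night"
    continent: str
    country: str
    climate_zone: str    # e.g. "subarctic"
    environment: str     # e.g. "urban", "rural", "coastal"
    lat: float           # latitude in degrees, positive = north
    lon: float           # longitude in degrees, positive = east

# Toy plausibility rule: in the Northern Hemisphere, a "winter" prediction
# should co-occur with a winter month. A simplified stand-in for the
# physical-consistency reasoning the benchmark evaluates.
WINTER_MONTHS_NORTH = {"December", "January", "February"}

def season_month_consistent(pred: GeoTemporalPrediction) -> bool:
    """Return True if the predicted season and month are mutually plausible."""
    if pred.lat >= 0 and pred.season == "winter":
        return pred.month in WINTER_MONTHS_NORTH
    return True  # other cases pass in this simplified check

pred = GeoTemporalPrediction(
    season="winter", month="January", time_of_day="09:00",
    daylight_phase="daytime", continent="Europe", country="Norway",
    climate_zone="subarctic", environment="urban", lat=59.9, lon=10.7,
)
print(season_month_consistent(pred))  # True: January winter in the north
```

Joint prediction over such a record, rather than over isolated labels, is what allows cross-attribute consistency (season vs. month, daylight phase vs. latitude) to be scored at all.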