Spatio-Temporal Grounding of Large Language Models from Perception Streams

📅 2026-04-08
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Current small-scale large language models perform poorly on fine-grained spatial relations, metric distance reasoning, and temporal sequencing, which limits embodied agents in dynamic 3D environments. To address this, the work proposes the FESTS framework, which introduces SpRE, a novel spatiotemporal regular expression formalism that combines regular expression syntax with S4u spatial logic and supports both universal and existential quantifiers. The framework compiles natural-language queries into formal spatiotemporal specifications and automatically generates large-scale, aligned training data by matching those specifications against structured video logs, eliminating the need for manual annotation. Fine-tuning a 3-billion-parameter model on 27k synthesized samples boosts frame-level F1 from 48.5% to 87.5%, matching GPT-4.1 on complex spatiotemporal reasoning while being two orders of magnitude smaller.
πŸ“ Abstract
Embodied-AI agents must reason about how objects move and interact in 3-D space over time, yet existing smaller frontier Large Language Models (LLMs) still mishandle fine-grained spatial relations, metric distances, and temporal orderings. We introduce the general framework Formally Explainable Spatio-Temporal Scenes (FESTS), which injects verifiable spatio-temporal supervision into an LLM by compiling natural-language queries into Spatial Regular Expressions (SpRE) -- a language combining regular expression syntax with S4u spatial logic, extended here with universal and existential quantification. The pipeline matches each SpRE against any structured video log and exports aligned (query, frames, match, explanation) tuples, enabling unlimited training data without manual labels. Training a 3-billion-parameter model on 27k such tuples boosts frame-level F1 from 48.5% to 87.5%, matching GPT-4.1 on complex spatio-temporal reasoning while remaining two orders of magnitude smaller, and hence enabling spatio-temporal intelligence for Video LLMs.
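The core matching idea -- a regex-style pattern whose atoms are per-frame spatial predicates, evaluated against a structured video log -- can be illustrated with a toy sketch. This is an assumption-laden illustration, not the paper's actual SpRE implementation: the frame representation, predicate names, and quantifier set here are all hypothetical simplifications.

```python
# Illustrative sketch only (hypothetical names, not the paper's code):
# each frame is a set of atomic spatial predicates from a video log;
# a pattern is a sequence of (test, quantifier) steps, with quantifiers
# borrowing regex syntax: "1" = exactly one frame, "+" = one or more,
# "*" = zero or more frames satisfying the test.

def match_pattern(pattern, frames, start=0):
    """Return the frame index where a greedy match beginning at
    `start` ends, or None if the pattern does not match there."""
    def step(idx, p):
        if p == len(pattern):          # whole pattern consumed
            return idx
        test, quant = pattern[p]
        if quant == "1":               # exactly one matching frame
            if idx < len(frames) and test(frames[idx]):
                return step(idx + 1, p + 1)
            return None
        # "+" / "*": consume greedily, then backtrack if needed
        lo = 1 if quant == "+" else 0
        hi = idx
        while hi < len(frames) and test(frames[hi]):
            hi += 1
        for end in range(hi, idx + lo - 1, -1):
            res = step(end, p + 1)
            if res is not None:
                return res
        return None
    return step(start, 0)

# Example query: "robot near cup for one or more frames,
# then cup on table" against a three-frame log.
frames = [
    {"near(robot,cup)"},
    {"near(robot,cup)"},
    {"on(cup,table)"},
]
pattern = [
    (lambda f: "near(robot,cup)" in f, "+"),
    (lambda f: "on(cup,table)" in f, "1"),
]
print(match_pattern(pattern, frames))  # -> 3 (match spans frames 0..2)
```

A matcher like this is what lets the pipeline export aligned (query, frames, match, explanation) tuples automatically: every accepted span is itself the supervision signal, so no human annotation is required.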
Problem

Research questions and friction points this paper is trying to address.

Spatio-Temporal Reasoning
Large Language Models
Embodied AI
Spatial Relations
Temporal Ordering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatio-Temporal Reasoning
Spatial Regular Expression
Formally Explainable Framework
Video LLM
Embodied AI