🤖 AI Summary
This work addresses the challenge of automatically evaluating the functional usability of 3D scenes for embodied agents, i.e., whether a scene lets agents perform meaningful interactive tasks. It introduces SceneTeract, a framework that decomposes high-level semantic activities into atomic action sequences and, by integrating agent specifications with geometric and physical simulation, systematically verifies constraints such as reachability, spatial clearance, and navigability. As the first method dedicated to automated functional usability assessment for embodied agents, SceneTeract uncovers a systematic gap in current vision-language models (VLMs) between semantic plausibility and physical feasibility. Leveraging this insight, the framework uses the detected discrepancies as reward signals in reinforcement-learning post-training, distilling geometric constraints into VLMs to improve their geometric reasoning. Experiments on synthetic indoor environments reveal numerous functional defects that block basic interactions and expose prevalent VLM misjudgments about scene usability, which the reward-driven post-training then mitigates.
📝 Abstract
Embodied AI depends on interactive 3D environments that support meaningful activities for diverse users, yet assessing their functional affordances remains a core challenge. We introduce SceneTeract, a framework that verifies 3D scene functionality under agent-specific constraints. Our core contribution is a grounded verification engine that couples high-level semantic reasoning with low-level geometric checks. SceneTeract decomposes complex activities into sequences of atomic actions and validates each step against accessibility requirements (e.g., reachability, clearance, and navigability) conditioned on an embodied agent profile, using explicit physical and geometric simulations. We deploy SceneTeract to perform an in-depth evaluation of (i) synthetic indoor environments, uncovering frequent functional failures that prevent basic interactions, and (ii) the ability of frontier Vision-Language Models (VLMs) to reason about and predict functional affordances, revealing systematic mismatches between semantic confidence and physical feasibility even for the strongest current models. Finally, we leverage SceneTeract as a reward engine for VLM post-training, enabling scalable distillation of geometric constraints into reasoning models. We release the SceneTeract verification suite and data to bridge perception and physical reality in embodied 3D scene understanding.
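The verification pipeline described above (decompose an activity into atomic actions, then validate each step against agent-conditioned accessibility checks) can be illustrated with a minimal sketch. All names here (`AgentProfile`, `AtomicAction`, `check_action`, `verify_activity`) and the threshold logic are hypothetical simplifications for illustration, not SceneTeract's actual API; the real system uses explicit physical and geometric simulation rather than scalar comparisons.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical agent profile: illustrative parameters only.
@dataclass
class AgentProfile:
    reach: float        # maximum arm reach from standing pose (m)
    body_radius: float  # minimum corridor clearance needed to pass (m)
    step_height: float  # tallest step the agent can traverse (m)

# One atomic action produced by decomposing a high-level activity.
@dataclass
class AtomicAction:
    kind: str              # "navigate" or "reach"
    clearance: float       # measured free radius along the path / at target (m)
    distance: float = 0.0  # for "reach": target distance from standing pose (m)
    step: float = 0.0      # for "navigate": largest step along the path (m)

def check_action(a: AtomicAction, agent: AgentProfile) -> Tuple[bool, str]:
    """Validate a single atomic action against the agent profile."""
    if a.kind == "navigate":
        if a.clearance < agent.body_radius:
            return False, "path too narrow"
        if a.step > agent.step_height:
            return False, "step too high"
        return True, "navigable"
    if a.kind == "reach":
        if a.distance > agent.reach:
            return False, "target out of reach"
        if a.clearance <= 0:
            return False, "target occluded"
        return True, "reachable"
    return False, f"unknown action kind: {a.kind}"

def verify_activity(actions: List[AtomicAction],
                    agent: AgentProfile):
    """An activity is usable only if every atomic step passes its checks."""
    results = [check_action(a, agent) for a in actions]
    return all(ok for ok, _ in results), results

# Example: "fetch a mug" decomposed into two atomic actions.
adult = AgentProfile(reach=0.8, body_radius=0.3, step_height=0.2)
activity = [
    AtomicAction(kind="navigate", clearance=0.5, step=0.1),   # walk to counter
    AtomicAction(kind="reach", clearance=0.2, distance=1.1),  # mug on top shelf
]
usable, results = verify_activity(activity, adult)
print(usable, results)  # False: the mug exceeds the agent's 0.8 m reach
```

A scene that a VLM judges semantically plausible ("a mug on a shelf is reachable") can still fail such a check for a given agent profile, which is exactly the semantic-vs-physical mismatch the paper measures.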