SeqAffordSplat: Scene-level Sequential Affordance Reasoning on 3D Gaussian Splatting

📅 2025-07-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing 3D Gaussian Splatting (3DGS) affordance methods support only single-object, single-step reasoning, limiting their applicability to long-horizon, multi-object 3D scenes. Method: The paper introduces *Sequential 3D Gaussian Affordance Reasoning*, the first sequential affordance reasoning task for 3DGS scenes, together with SeqAffordSplat, a large-scale benchmark of 1,800+ scenes. The proposed framework, SeqSplatNet, uses a large language model (LLM) to autoregressively generate text interleaved with special segmentation tokens that guide a conditional decoder to produce 3D affordance masks, combined with Conditional Geometric Reconstruction pretraining and multi-scale feature injection from 2D vision foundation models. Contribution/Results: Experiments show significant gains over prior methods on the new benchmark. The approach achieves the first end-to-end, scene-level sequential prediction of affordance regions spanning multiple steps, multiple objects, and spatially coherent zones, enabling precise, long-horizon affordance understanding in real-world 3D environments and advancing embodied agents' capability for extended task execution.

📝 Abstract
3D affordance reasoning, the task of associating human instructions with the functional regions of 3D objects, is a critical capability for embodied agents. Current methods based on 3D Gaussian Splatting (3DGS) are fundamentally limited to single-object, single-step interactions, a paradigm that falls short of addressing the long-horizon, multi-object tasks required for complex real-world applications. To bridge this gap, we introduce the novel task of Sequential 3D Gaussian Affordance Reasoning and establish SeqAffordSplat, a large-scale benchmark featuring 1800+ scenes to support research on long-horizon affordance understanding in complex 3DGS environments. We then propose SeqSplatNet, an end-to-end framework that directly maps an instruction to a sequence of 3D affordance masks. SeqSplatNet employs a large language model that autoregressively generates text interleaved with special segmentation tokens, guiding a conditional decoder to produce the corresponding 3D mask. To handle complex scene geometry, we introduce a pre-training strategy, Conditional Geometric Reconstruction, where the model learns to reconstruct complete affordance region masks from known geometric observations, thereby building a robust geometric prior. Furthermore, to resolve semantic ambiguities, we design a feature injection mechanism that lifts rich semantic features from 2D Vision Foundation Models (VFM) and fuses them into the 3D decoder at multiple scales. Extensive experiments demonstrate that our method sets a new state-of-the-art on our challenging benchmark, effectively advancing affordance reasoning from single-step interactions to complex, sequential tasks at the scene level.
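The interleaved decoding loop described above (LLM text generation punctuated by special segmentation tokens, each of which conditions a 3D mask decoder) can be sketched as follows. This is a hypothetical toy illustration, not the authors' code: `mock_llm_generate`, `mask_decoder`, and the `<SEG>` token name are stand-ins for the paper's actual components.

```python
# Toy sketch of interleaved text/segmentation-token decoding (all names
# hypothetical; the real system uses an LLM and a learned 3D decoder).
import random

SEG = "<SEG>"  # assumed special segmentation token

def mock_llm_generate(instruction):
    """Stand-in for the autoregressive LLM: yields (token, hidden_state)."""
    random.seed(0)
    script = ["First,", "grasp", "the", "handle", SEG,
              "then", "pull", "the", "door", SEG]
    for tok in script:
        hidden = [random.random() for _ in range(4)]  # toy hidden state
        yield tok, hidden

def mask_decoder(hidden, num_gaussians):
    """Toy conditional decoder: one boolean per scene Gaussian."""
    return [hidden[i % len(hidden)] > 0.5 for i in range(num_gaussians)]

def run(instruction, num_gaussians=8):
    text, masks = [], []
    for tok, hidden in mock_llm_generate(instruction):
        text.append(tok)
        if tok == SEG:  # segmentation token: decode one 3D affordance mask
            masks.append(mask_decoder(hidden, num_gaussians))
    return " ".join(text), masks

sentence, masks = run("open the door")
print(len(masks))  # one mask per affordance step in the sequence
```

The key design point this mirrors is that the mask sequence falls out of ordinary autoregressive generation: each step's mask is tied to the position of a segmentation token in the generated text, so multi-step plans need no separate sequence head.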
Problem

Research questions and friction points this paper is trying to address.

Extends 3D affordance reasoning to sequential multi-object tasks
Introduces a benchmark for long-horizon affordance in 3DGS scenes
Proposes a framework mapping instructions to 3D affordance mask sequences
Innovation

Methods, ideas, or system contributions that make the work stand out.

SeqSplatNet maps instructions end-to-end to sequences of 3D affordance masks
Conditional Geometric Reconstruction pre-training builds a robust geometric prior
Multi-scale feature injection fuses 2D VFM semantics into the 3D decoder
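The feature-injection idea, lifting 2D vision-foundation-model features and fusing them into the 3D decoder at multiple scales, can be sketched minimally as below. This is an assumption-laden illustration, not the paper's implementation: it presumes the VFM features are already lifted to per-Gaussian vectors, and it uses projection-plus-addition as one plausible fusion operator.

```python
# Hypothetical multi-scale 2D->3D feature injection (toy numpy sketch).
import numpy as np

rng = np.random.default_rng(0)
num_gaussians, d3d, d2d = 16, 32, 64
scales = [32, 16, 8]  # assumed decoder channel widths, coarse to fine

feats_3d = rng.normal(size=(num_gaussians, d3d))   # decoder features
feats_2d = rng.normal(size=(num_gaussians, d2d))   # lifted VFM features

def inject(x3d, x2d, out_dim, rng):
    """Project both streams to out_dim and fuse by addition (one choice
    among many; the source does not specify the exact operator here)."""
    w3 = rng.normal(size=(x3d.shape[1], out_dim)) / np.sqrt(x3d.shape[1])
    w2 = rng.normal(size=(x2d.shape[1], out_dim)) / np.sqrt(x2d.shape[1])
    return x3d @ w3 + x2d @ w2

x = feats_3d
for width in scales:  # re-inject the same semantic features at every scale
    x = inject(x, feats_2d, width, rng)
print(x.shape)
```

Injecting at every scale, rather than only at the input, is what lets semantic cues disambiguate both coarse object identity and fine affordance-region boundaries.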