🤖 AI Summary
3D large language models (3D-LLMs) suffer from inaccurate alignment between linguistic expressions and 3D spatial elements, largely because their training data is biased toward language reasoning and high-quality 3D supervision is scarce, which leads to spatial grounding errors. To address this, we propose DEER-3D, a framework that leverages error-driven, fine-grained 3D scene editing (e.g., attribute recoloring, object repositioning) to automatically generate predicate-aligned counterfactual samples, yielding lightweight yet highly discriminative supervision signals. DEER-3D follows a closed-loop pipeline: decomposition → diagnostic evaluation → editing → retraining. It precisely identifies and rectifies spatial-understanding deficiencies with minimal editing operations, eliminating the need for large-scale 3D data collection or complex reconstruction. Extensive experiments on multiple 3D grounding and scene-understanding benchmarks demonstrate significant improvements in language–space alignment accuracy, validating counterfactual editing as an effective new paradigm for improving spatial grounding.
📝 Abstract
Despite recent progress in 3D-LLMs, they remain limited in accurately grounding language to visual and spatial elements in 3D environments. This limitation stems in part from training data that, owing to scarce 3D resources, focuses on language reasoning rather than spatial understanding, leaving inherent grounding biases unresolved. To address this, we propose 3D scene editing as a key mechanism for generating precise visual counterfactuals that mitigate these biases through fine-grained spatial manipulation, without requiring costly scene reconstruction or large-scale 3D data collection. Furthermore, to make these edits targeted and directly address the model's specific weaknesses, we introduce DEER-3D, an error-driven framework following a structured "Decompose, Diagnostic Evaluation, Edit, and Re-train" workflow, rather than broadly or randomly augmenting data as in conventional approaches. Specifically, upon identifying a grounding failure of the 3D-LLM, our framework first diagnoses the exact predicate-level error (e.g., an attribute or a spatial relation). It then executes minimal, predicate-aligned 3D scene edits, such as recoloring or repositioning, to produce targeted counterfactual supervision for iterative model fine-tuning, significantly enhancing grounding accuracy. We evaluate our editing pipeline across multiple benchmarks for 3D grounding and scene-understanding tasks, consistently demonstrating improvements across all evaluated datasets through iterative refinement. DEER-3D underscores the effectiveness of targeted, error-driven scene editing in bridging linguistic reasoning capabilities with spatial grounding in 3D-LLMs.
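The abstract describes a closed-loop "Decompose, Diagnostic Evaluation, Edit, Re-train" workflow. The sketch below is a minimal, hypothetical illustration of one pass of that loop, assuming the grounding model, predicate decomposer, predicate checker, and scene editor are supplied as callables; the class and function names (`Predicate`, `Scene`, `ground`, `decompose`, `satisfied`, `edit`) are placeholders, not the paper's actual interfaces.

```python
# Hypothetical sketch of one DEER-3D iteration; all names are illustrative.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple


@dataclass(frozen=True)
class Predicate:
    kind: str    # e.g., "attribute" or "spatial_relation"
    value: str   # e.g., "red" or "left_of(table)"


@dataclass(frozen=True)
class Scene:
    objects: Tuple[str, ...]   # placeholder for a real 3D scene representation


def deer3d_iteration(
    ground: Callable[[Scene, str], str],                 # 3D-LLM grounding call
    decompose: Callable[[str], List[Predicate]],         # expression -> predicates
    satisfied: Callable[[Predicate, str, Scene], bool],  # predicate-level check
    edit: Callable[[Scene, Predicate], Scene],           # minimal predicate-aligned edit
    scene: Scene,
    query: str,
) -> Optional[List[Tuple[Scene, str]]]:
    """One pass of the closed loop: returns counterfactual (scene, query)
    pairs for fine-tuning, or None if grounding already succeeds."""
    # 1) Decompose the referring expression into checkable predicates.
    predicates = decompose(query)

    # 2) Diagnostic evaluation: which predicates does the prediction violate?
    prediction = ground(scene, query)
    failed = [p for p in predicates if not satisfied(p, prediction, scene)]
    if not failed:
        return None

    # 3) Edit: one minimal counterfactual scene per failed predicate
    #    (e.g., recolor for attribute errors, reposition for relation errors).
    # 4) Re-train: the caller appends these pairs to the fine-tuning set
    #    before the next diagnostic pass.
    return [(edit(scene, p), query) for p in failed]
```

In this framing, the returned counterfactual pairs would simply be added to the supervision set and the model fine-tuned before the loop runs again, matching the iterative refinement described above.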