AI Summary
Existing 3D vision-language (3D-VL) benchmarks suffer from test-set bias, narrow evaluation metrics, and fragmented tasks, limiting their ability to holistically assess model capabilities. To address these limitations, we propose Beacon3D, the first object-centric benchmark for 3D vision-language understanding. Beacon3D introduces: (1) an object-level multi-metric evaluation paradigm covering grounding accuracy, question-answering generalization, and cross-task consistency; (2) a grounding-QA joint analysis pipeline enabling causal, interpretable diagnostic evaluation; and (3) an LLM-augmented framework for natural referring-expression generation and robustness assessment. Experiments reveal significant fragility in current 3D-VL models when jointly performing grounding and QA, and further show that naïve LLM integration can degrade grounding performance. Beacon3D provides a systematic, diagnostic benchmark to advance trustworthy 3D-VL development.
Abstract
Existing 3D vision-language (3D-VL) benchmarks fall short in evaluating 3D-VL models, creating a "mist" that obscures rigorous insights into model capabilities and 3D-VL tasks. This mist persists due to three key limitations. First, flawed test data, such as ambiguous referential text in the grounding task, can yield incorrect and unreliable test results. Second, oversimplified metrics, such as simply averaging accuracy over question-answering (QA) pairs, cannot reveal true model capability due to their vulnerability to language variations. Third, existing benchmarks isolate the grounding and QA tasks, disregarding the underlying coherence that QA should be built on solid grounding capability. To unveil the "mist", we propose Beacon3D, a benchmark for 3D-VL grounding and QA tasks that delivers a perspective shift in the evaluation of 3D-VL understanding. Beacon3D features (i) high-quality test data with precise and natural language, (ii) object-centric evaluation with multiple tests per object to ensure robustness, and (iii) a novel chain-of-analysis paradigm to address language robustness and model performance coherence across grounding and QA. Our evaluation of state-of-the-art 3D-VL models on Beacon3D reveals that (i) object-centric evaluation elicits true model performance and exposes particularly weak generalization in QA; (ii) grounding-QA coherence remains fragile in current 3D-VL models; and (iii) incorporating large language models (LLMs) into 3D-VL models, though a prevalent practice, hinders grounding capabilities and has yet to elevate QA capabilities. We hope Beacon3D and our comprehensive analysis can benefit the 3D-VL community toward faithful development.