🤖 AI Summary
Large language models (LLMs) exhibit limited capability in multi-step tool invocation and procedural reasoning within complex 3D scenes. Method: The paper proposes a combinatorial, iterative evolution framework that automatically generates high-difficulty 3D spatial question-answering (QA) instances, yielding challenging evaluation tasks on the SQA3D benchmark. The approach integrates API-based tool calling, program generation, chain-of-thought (CoT) prompting, and direct preference optimization (DPO) end to end to refine the model's choice of toolchain strategies. Contribution/Results: The key innovation is applying DPO to multi-stage tool-scheduling decisions in 3D reasoning, improving both accuracy and robustness under complex 3D conditions. Experiments show substantial gains over baselines on SQA3D, with accuracy improvements of up to 21.4% on multi-step tool invocation and procedural reasoning tasks.
📝 Abstract
This work enhances the ability of large language models (LLMs) to perform complex reasoning in 3D scenes. Recent work addresses the 3D situated reasoning task by having LLMs invoke external tools: the models call tools via APIs, generate programs, and integrate the program results through a chain of thought to answer questions. However, because the questions in existing datasets are simple, the resulting program reasoning chains are relatively short. To address this challenge, we introduce DeepThink3D, which strengthens LLM tool usage on complex 3D situated reasoning tasks. We propose a combinatorial and iterative evolutionary approach that generates more complex questions on the SQA3D benchmark. Building on this foundation, we fine-tune the LLM to make it more proficient at using 3D tools, and by employing Direct Preference Optimization (DPO) we directly optimize the toolchain strategies the model generates, improving its accuracy on complex tasks.
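The DPO objective mentioned above can be sketched as follows. This is a minimal illustration of the standard DPO loss applied to preference pairs over toolchain traces, not the paper's actual implementation; all variable names here are illustrative assumptions.

```python
import math

def dpo_loss(policy_chosen_lp, policy_rejected_lp,
             ref_chosen_lp, ref_rejected_lp, beta=0.1):
    """Standard DPO loss on one preference pair.

    Each argument is the summed log-probability of a full toolchain
    trace (a sequence of tool calls) under either the trainable policy
    or the frozen reference model. The "chosen" trace is the preferred
    one (e.g. it answered the 3D question correctly); the "rejected"
    trace is dispreferred. Names are illustrative, not from the paper.
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen trace over the rejected one, relative to the reference.
    margin = beta * ((policy_chosen_lp - ref_chosen_lp)
                     - (policy_rejected_lp - ref_rejected_lp))
    # Loss is the negative log-sigmoid of that margin.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A policy that clearly prefers the correct toolchain (larger margin)
# incurs a smaller loss than one that is indifferent.
assert dpo_loss(-5.0, -9.0, -6.0, -6.0) < dpo_loss(-7.0, -7.0, -6.0, -6.0)
```

Minimizing this loss pushes the policy to assign higher probability to tool-call sequences that solve the task, without an explicit reward model.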