DeepThink3D: Enhancing Large Language Models with Programmatic Reasoning in Complex 3D Situated Reasoning Tasks

📅 2025-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit limited capability in multi-step tool invocation and procedural reasoning within complex 3D scenes. Method: This paper proposes a compositional iterative evolution framework that automatically generates high-difficulty 3D spatial question-answering (QA) instances, constructing challenging evaluation tasks on the SQA3D benchmark. The approach integrates API-based tool calling, program generation, chain-of-thought (CoT) prompting, and direct preference optimization (DPO) end to end to refine the model's choice of tool-chain policies. Contribution/Results: The key innovation is applying DPO to multi-stage tool-scheduling decisions in 3D reasoning, improving both accuracy and robustness in complex 3D conditions. Experiments demonstrate substantial gains over baselines on SQA3D, with accuracy improvements of up to 21.4% on multi-step tool invocation and procedural reasoning tasks.

📝 Abstract
This work enhances the ability of large language models (LLMs) to perform complex reasoning in 3D scenes. Recent work addresses the 3D situated reasoning task by having LLMs invoke tools: the models call tools via APIs and integrate the generated programs through a chain of thought, solving problems based on the program results. However, because the questions in the existing dataset are simple, the generated program reasoning chains are relatively short. To address this challenge, we introduce DeepThink3D, which strengthens the tool usage of LLMs in complex 3D situated reasoning tasks. We propose a combinatorial and iterative evolutionary approach on the SQA3D benchmark to generate more complex questions. Building on this foundation, we fine-tune the large language model to make it more proficient in using 3D tools. By employing Direct Preference Optimization (DPO), we directly optimize the toolchain strategies generated by models, thereby improving their accuracy on complex tasks.
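The combinatorial, iterative question evolution described above can be illustrated with a minimal sketch. This is not the paper's implementation: the templates, the `evolve` helper, and the hop count are all hypothetical, chosen only to show how nesting single-hop spatial relations grows the reasoning chain a question requires.

```python
import random

# Hypothetical single-hop spatial templates; each round wraps the current
# referent in one more relation, so answering requires one more tool call.
SINGLE_HOP = [
    "the object left of the {ref}",
    "the object closest to the {ref}",
    "the largest object near the {ref}",
]

def evolve(ref, hops, rng):
    """Iteratively compose spatial relations around a seed referent."""
    for _ in range(hops):
        ref = rng.choice(SINGLE_HOP).format(ref=ref)
    return "What is " + ref + "?"

rng = random.Random(0)
question = evolve("table", hops=3, rng=rng)
```

Each additional hop forces the model to chain another spatial tool invocation, which is the kind of long program reasoning chain the simple SQA3D questions lack.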
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLMs' complex reasoning in 3D scenes
Addressing short program chains in simple 3D questions
Improving tool usage accuracy for complex 3D tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combinatorial iterative evolutionary approach for complex questions
Fine-tuning LLMs for proficient 3D tool usage
Direct Preference Optimization for toolchain strategy accuracy
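The DPO objective used to optimize toolchain strategies can be sketched for a single preference pair. This is a generic DPO loss, not the paper's training code; the log-probabilities here would come from scoring a successful ("chosen") tool-call program against a failed ("rejected") one under the policy and a frozen reference model.

```python
import math

def dpo_loss(chosen_logp, rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair of tool-call programs.

    The loss is -log sigmoid(beta * (log-ratio of chosen - log-ratio of
    rejected)), pushing the policy to prefer the chosen program more
    strongly than the reference model does.
    """
    chosen_ratio = chosen_logp - ref_chosen_logp
    rejected_ratio = rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(logits)) written in a numerically stable form
    return math.log1p(math.exp(-logits))

# Example: the policy already favors the chosen program slightly more
# than the reference does, so the loss is below log(2).
loss = dpo_loss(chosen_logp=-5.0, rejected_logp=-9.0,
                ref_chosen_logp=-6.0, ref_rejected_logp=-8.0, beta=0.1)
```

Averaging this loss over a dataset of (question, successful program, failed program) triples is what directly optimizes the model's tool-chain policy without a separate reward model.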
Jiayi Song, Fudan University
Rui Wan, Fudan University
Lipeng Ma, Fudan University
Weidong Yang, Professor of Computer Science, Big Data
Qingyuan Zhou, Fudan University (Computer Vision, Computer Graphics, Biomedical Engineering)
Yixuan Li, Fudan University
Ben Fei, The Chinese University of Hong Kong