Efficient3D: A Unified Framework for Adaptive and Debiased Token Reduction in 3D MLLMs

📅 2026-04-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high computational overhead of 3D multimodal large language models, which stems from their high-dimensional inputs and massive architectures and hinders deployment on resource-constrained platforms. To tackle this challenge, the authors propose a unified visual token pruning framework comprising a Debiased Visual Token Importance Estimator (DVTIE) and an Adaptive Token Rebalancing (ATR) strategy. By leveraging attention-driven importance estimation, dynamically adjusting pruning intensity, and balancing cross-layer attention, the framework achieves context-aware, efficient model compression. Evaluated across five 3D vision-language benchmarks, the method surpasses the unpruned baseline, notably improving CIDEr by 2.57% on the Scan2Cap dataset while substantially reducing computational cost—effectively preserving semantic integrity without compromising efficiency.
📝 Abstract
Recent advances in Multimodal Large Language Models (MLLMs) have expanded reasoning capabilities into 3D domains, enabling fine-grained spatial understanding. However, the substantial size of 3D MLLMs and the high dimensionality of input features introduce considerable inference overhead, which limits practical deployment on resource-constrained platforms. To overcome this limitation, this paper presents Efficient3D, a unified framework for visual token pruning that accelerates 3D MLLMs while maintaining competitive accuracy. The proposed framework introduces a Debiased Visual Token Importance Estimator (DVTIE) module, which accounts for the influence of shallow initial layers during attention aggregation, thereby producing more reliable importance predictions for visual tokens. In addition, an Adaptive Token Rebalancing (ATR) strategy is developed to dynamically adjust pruning strength based on scene complexity, preserving semantic completeness and maintaining balanced attention across layers. Together, these components enable context-aware token reduction that retains essential semantics at lower computational cost. Comprehensive experiments on five representative 3D vision-language benchmarks—ScanRefer, Multi3DRefer, Scan2Cap, ScanQA, and SQA3D—demonstrate that Efficient3D achieves superior performance compared with unpruned baselines, including a +2.57% CIDEr improvement on the Scan2Cap dataset. Efficient3D thus provides a scalable and effective solution for efficient inference in 3D MLLMs. The code is released at: https://github.com/sol924/Efficient3D
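The paper itself does not publish pseudocode here, but the two ideas in the abstract—attention-driven importance estimation with down-weighted shallow layers (DVTIE) and a pruning ratio that adapts to scene complexity (ATR)—can be illustrated with a minimal NumPy sketch. Everything below is hypothetical: the function name, the linear layer weighting, and the entropy-based complexity proxy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def prune_visual_tokens(attn, keep_ratio=0.5, layer_weights=None):
    """Sketch of attention-driven visual token pruning (illustrative only).

    attn: (num_layers, num_tokens) attention mass each visual token
          receives from the query/text tokens, per layer.
    keep_ratio: base fraction of tokens to keep.
    layer_weights: per-layer weights; down-weighting shallow layers
          mimics the idea of debiasing noisy early-layer attention.
    """
    num_layers, num_tokens = attn.shape
    if layer_weights is None:
        # Hypothetical debiasing: deeper layers contribute more.
        layer_weights = np.linspace(0.2, 1.0, num_layers)
    importance = (layer_weights[:, None] * attn).sum(axis=0)

    # Hypothetical adaptive rebalancing: normalized entropy of the
    # importance distribution as a scene-complexity proxy; flatter
    # distributions (complex scenes) keep more tokens.
    p = importance / importance.sum()
    entropy = -(p * np.log(p + 1e-12)).sum() / np.log(num_tokens)
    adaptive_ratio = float(np.clip(keep_ratio * (0.5 + entropy),
                                   keep_ratio, 1.0))

    k = max(1, int(round(adaptive_ratio * num_tokens)))
    # Keep the k most important tokens, preserving their original order.
    keep = np.sort(np.argsort(importance)[-k:])
    return keep, importance

# Usage: 4 layers x 8 visual tokens of synthetic attention scores.
rng = np.random.default_rng(0)
attn = rng.random((4, 8))
keep, imp = prune_visual_tokens(attn, keep_ratio=0.5)
print("kept token indices:", keep)
```

In this sketch a uniform importance distribution yields entropy near 1, so the effective keep ratio rises toward 0.75, while a sharply peaked distribution falls back to the base ratio of 0.5—one simple way to realize "dynamically adjusting pruning strength based on scene complexity."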
Problem

Research questions and friction points this paper is trying to address.

3D MLLMs
inference overhead
resource-constrained deployment
visual token reduction
computational efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Efficient3D
Debiased Visual Token Importance Estimator
Adaptive Token Rebalancing
3D MLLMs
Token Pruning
🔎 Similar Papers
2024-06-27 · Conference on Empirical Methods in Natural Language Processing · Citations: 2