🤖 AI Summary
This work addresses the performance bottlenecks in 3D object annotation caused by occlusion, viewpoint variation, and spatial complexity by proposing Tri-MARF, a novel framework that introduces, for the first time, a trimodal multi-agent collaboration mechanism to jointly process 2D multi-view images, 3D point clouds, and textual descriptions. By decoupling and co-optimizing visual-language understanding, information selection, and semantic-geometric alignment, the method significantly improves both annotation accuracy and scalability. Evaluated on Objaverse-LVIS and related benchmarks, the model achieves a CLIPScore of 88.7 and ViLT R@5 retrieval accuracies of 45.2 and 43.8, while sustaining a throughput of 12,000 annotated objects per hour on a single NVIDIA A100 GPU.
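For context on the CLIPScore metric cited above: it is the reference-free caption metric of Hessel et al., defined as a rescaled, non-negative cosine similarity between CLIP image and text embeddings (often reported multiplied by 100, as the 88.7 here suggests). A minimal sketch of the formula on precomputed embeddings, using plain NumPy rather than an actual CLIP model:

```python
import numpy as np

def clipscore(img_emb: np.ndarray, txt_emb: np.ndarray) -> float:
    """CLIPScore = w * max(cos(img, txt), 0), with w = 2.5.

    Takes precomputed CLIP embeddings; papers typically report the
    value scaled by 100.
    """
    cos = float(img_emb @ txt_emb /
                (np.linalg.norm(img_emb) * np.linalg.norm(txt_emb)))
    return 2.5 * max(cos, 0.0)

# Identical embeddings give cosine 1.0, hence the maximum score 2.5.
v = np.array([0.6, 0.8])
print(clipscore(v, v))  # 2.5
```

In practice the embeddings would come from a CLIP encoder (e.g. the `openai/clip` or `open_clip` packages); the clamp at zero ensures captions pointing "away" from the image score 0 rather than negative.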
📝 Abstract
Driven by applications in autonomous driving, robotics, and augmented reality, 3D object annotation presents challenges beyond 2D annotation, including spatial complexity, occlusion, and viewpoint inconsistency. Existing approaches based on single models often struggle to address these issues effectively. We propose Tri-MARF, a novel framework that integrates trimodal inputs (2D multi-view images, textual descriptions, and 3D point clouds) within a multi-agent collaborative architecture to enhance large-scale 3D annotation. Tri-MARF consists of three specialized agents: a vision-language model agent for generating multi-view descriptions, an information aggregation agent for selecting optimal descriptions, and a gating agent that aligns textual semantics with 3D geometry for refined captioning. Extensive experiments on Objaverse-LVIS, Objaverse-XL, and ABO demonstrate that Tri-MARF substantially outperforms existing methods, achieving a CLIPScore of 88.7 compared to prior state-of-the-art methods, retrieval accuracies of 45.2 and 43.8 on ViLT R@5, and a throughput of up to 12,000 objects per hour on a single NVIDIA A100 GPU.
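The three-agent flow described in the abstract (per-view captioning, description selection, geometry-aware refinement) can be sketched as a simple pipeline. This is an illustrative skeleton, not the paper's implementation: the agent names and callable signatures below are assumptions, and the toy agents stand in for the actual VLM, aggregation, and gating models.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TriMARF:
    """Hypothetical sketch of the Tri-MARF three-agent pipeline."""
    # multi-view images -> one caption per view
    vlm_agent: Callable[[List[bytes]], List[str]]
    # candidate captions -> single best description
    aggregation_agent: Callable[[List[str]], str]
    # best description + point cloud -> geometry-refined caption
    gating_agent: Callable[[str, object], str]

    def annotate(self, views: List[bytes], point_cloud: object) -> str:
        captions = self.vlm_agent(views)             # 1. describe each 2D view
        best = self.aggregation_agent(captions)      # 2. select the optimal description
        return self.gating_agent(best, point_cloud)  # 3. align text with 3D geometry

# Toy stand-in agents, for demonstration only.
model = TriMARF(
    vlm_agent=lambda views: [f"view {i}: an object" for i, _ in enumerate(views)],
    aggregation_agent=lambda caps: max(caps, key=len),  # e.g. pick most detailed
    gating_agent=lambda cap, pc: cap + " (geometry-checked)",
)
print(model.annotate([b"img0", b"img1"], point_cloud=None))
```

Decoupling the stages this way is what lets each agent be optimized (or swapped out) independently, which the summary credits for the method's accuracy and scalability.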