🤖 AI Summary
To address the high computational redundancy, low sampling efficiency, and architectural complexity of 3D Transformers on large-scale LiDAR point clouds, this paper proposes DTA-Former, a hierarchical Transformer architecture for efficient point cloud representation learning and dense prediction. Its core contributions are: (i) Dynamic Token Aggregating (DTA), a novel geometry-aware mechanism for adaptive feature compression; (ii) Learnable Token Sparsification (LTS) and Iterative Token Reconstruction (ITR), which jointly optimize representational capacity and computational efficiency; and (iii) a lightweight W-net architecture with a dual-attention Global Feature Enhancement (GFE) block that strengthens long-range contextual modeling. On ModelNet40, DTA-Former achieves state-of-the-art classification accuracy while significantly reducing FLOPs and memory consumption. On dense prediction tasks, including semantic segmentation on SemanticKITTI, it attains new state-of-the-art performance, demonstrating a superior efficiency-accuracy trade-off across benchmarks.
📝 Abstract
Recently, LiDAR point cloud processing and analysis have made great progress thanks to the development of 3D Transformers. However, existing 3D Transformer methods are usually computationally expensive and inefficient due to their huge and redundant attention maps. They also tend to be slow because they require time-consuming point cloud sampling and grouping. To address these issues, we propose an efficient point TransFormer with Dynamic Token Aggregating (DTA-Former) for point cloud representation and processing. First, we propose an efficient Learnable Token Sparsification (LTS) block, which considers both local and global semantic information for the adaptive selection of key tokens. Second, to aggregate features over the sparsified tokens, we present the first Dynamic Token Aggregating (DTA) block in the 3D Transformer paradigm, providing the model with strong aggregated features while preventing information loss. A dual-attention Transformer-based Global Feature Enhancement (GFE) block is then used to improve the representation capability of the model. Equipped with the LTS, DTA, and GFE blocks, DTA-Former achieves excellent classification results via hierarchical feature learning. Finally, a novel Iterative Token Reconstruction (ITR) block is introduced for dense prediction, whereby the semantic features of tokens and their semantic relationships are gradually optimized during iterative reconstruction. Based on ITR, we propose a new W-net architecture, which is better suited to Transformer-based feature learning than the common U-net design.
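To make the sparsification-plus-aggregation idea concrete, here is a minimal NumPy sketch of how an LTS/DTA-style pipeline could look. All names, shapes, and the scoring and aggregation rules below are illustrative assumptions, not the paper's implementation: tokens are scored from concatenated local and global features, the top-k are kept, and each pruned token's feature is folded into its most similar kept token so pruned information is aggregated rather than discarded.

```python
import numpy as np

def sparsify_tokens(x, w_score, keep_ratio=0.5):
    """LTS-style sketch: x is (N, C) token features, w_score a hypothetical
    (2C,) learned scoring vector mixing local and global context."""
    N, C = x.shape
    g = np.tile(x.mean(axis=0), (N, 1))                # global context, per token
    scores = np.concatenate([x, g], axis=1) @ w_score  # (N,) per-token score
    k = max(1, int(N * keep_ratio))
    idx = np.argsort(scores)[-k:]                      # indices of key tokens
    gate = 1.0 / (1.0 + np.exp(-scores[idx]))          # sigmoid gate keeps the
    return x[idx] * gate[:, None], idx                 # selection soft/trainable

def aggregate_tokens(x, kept_idx):
    """DTA-style sketch: fold each dropped token into its most similar kept
    token (cosine similarity) and mean-pool, so no feature is simply lost."""
    out = x[kept_idx].copy()
    dropped = np.setdiff1d(np.arange(x.shape[0]), kept_idx)
    if dropped.size == 0:
        return out
    xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
    owner = (xn[dropped] @ xn[kept_idx].T).argmax(axis=1)  # nearest kept token
    counts = np.ones(len(kept_idx))
    for d, o in zip(dropped, owner):
        out[o] += x[d]
        counts[o] += 1
    return out / counts[:, None]

rng = np.random.default_rng(0)
tokens = rng.standard_normal((128, 32))   # 128 tokens, 32-dim features
w = rng.standard_normal(64)               # 2*C hypothetical scoring weights
kept, idx = sparsify_tokens(tokens, w, keep_ratio=0.25)
agg = aggregate_tokens(tokens, idx)
print(kept.shape, agg.shape)              # (32, 32) (32, 32)
```

In a real model the scoring head would be a small MLP trained end to end and the aggregation would use learned attention weights rather than a hard nearest-neighbor assignment; the sketch only shows the data flow from N tokens down to k aggregated tokens.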