🤖 AI Summary
Existing SE(2)-invariant neural networks for motion prediction and multi-agent simulation suffer from O(N²) memory complexity due to explicit computation of all pairwise relative poses, hindering scalability to large-scale scenes.
Method: We propose the first SE(2)-invariant attention mechanism with linear, O(N), memory complexity. The key idea is to reformulate scaled dot-product attention so that, by construction, attention scores depend only on pairwise relative poses, which guarantees SE(2) invariance without ever materializing the full pairwise pose tensor. The mechanism integrates seamlessly into standard Transformer architectures without modifying the backbone.
Contribution/Results: Experiments on motion prediction and multi-agent simulation benchmarks show that the mechanism is practical to implement and outperforms comparable non-invariant architectures, while retaining generalization across unseen scene configurations and scaling to large agent counts. This establishes a computationally efficient way to model SE(2)-symmetric spatial relationships in autonomous driving systems.
📝 Abstract
Processing spatial data is a key component in many learning tasks for autonomous driving, such as motion forecasting, multi-agent simulation, and planning. Prior works have demonstrated the value of using SE(2)-invariant network architectures that consider only the relative poses between objects (e.g., other agents and scene features such as traffic lanes). However, these methods compute the relative poses for all pairs of objects explicitly, requiring quadratic memory. In this work, we propose a mechanism for SE(2)-invariant scaled dot-product attention that requires memory linear in the number of objects in the scene. Our SE(2)-invariant transformer architecture enjoys the same scaling properties that have benefited large language models in recent years. We demonstrate experimentally that our approach is practical to implement and improves performance compared to comparable non-invariant architectures.
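The linear-memory principle can be illustrated with a rotary-style encoding for the rotation component of a pose: if each object's query and key features are rotated once by that object's own heading, plain dot products between them depend only on relative headings, so no pairwise pose tensor is ever materialized. The sketch below is our own minimal illustration of this principle for headings only, not the paper's full mechanism (which also handles translations); all function names and shapes are assumptions.

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix R(theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def apply_rotary(x, thetas):
    """Rotate each consecutive feature pair of x[i] by thetas[i].

    x: (N, d) features with d even; thetas: (N,) headings.
    This is an O(N * d) operation -- one rotation per object.
    """
    N, d = x.shape
    pairs = x.reshape(N, d // 2, 2)
    c = np.cos(thetas)[:, None]
    s = np.sin(thetas)[:, None]
    x0, x1 = pairs[..., 0], pairs[..., 1]
    rotated = np.stack([c * x0 - s * x1, s * x0 + c * x1], axis=-1)
    return rotated.reshape(N, d)

rng = np.random.default_rng(0)
N, d = 5, 4
q = rng.normal(size=(N, d))          # per-object queries
k = rng.normal(size=(N, d))          # per-object keys
thetas = rng.uniform(0, 2 * np.pi, size=N)  # per-object headings

# Quadratic formulation: materialize the relative rotation for every
# pair (i, j) and apply it block-diagonally to k[j].
scores_pairwise = np.empty((N, N))
for i in range(N):
    for j in range(N):
        Rrel = rot(thetas[j] - thetas[i])
        kj = (k[j].reshape(-1, 2) @ Rrel.T).reshape(-1)
        scores_pairwise[i, j] = q[i] @ kj

# Linear-memory formulation: rotate q and k once per object, then take
# ordinary dot products. Since R(a)^T R(b) = R(b - a), the scores match
# the pairwise version exactly and are invariant to a global rotation.
scores_rotary = apply_rotary(q, thetas) @ apply_rotary(k, thetas).T
```

The nested loop exists only to verify that both formulations produce identical scores; the second formulation never stores anything pairwise beyond the N×N score matrix that standard attention already computes.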