On the Completeness of Invariant Geometric Deep Learning Models

📅 2024-02-07
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
The theoretical expressive power of E(3)-invariant geometric deep learning models for 3D point clouds remains poorly characterized. Method: We establish a rigorous analytical framework for E(3)-completeness on fully connected 3D point clouds. Contribution/Results: We first show that DisGNN, the classic message-passing network over pairwise distances, fails to distinguish only highly symmetric point clouds. We then give the first formal proof that GeoNGNN, the geometric counterpart of a simple subgraph GNN, breaks the symmetry of these corner cases and achieves E(3)-completeness. Using GeoNGNN as a theoretical tool, we further show that most subgraph GNNs from traditional graph learning, as well as the prominent invariant models DimeNet, GemNet, and SphereNet, are also E(3)-complete. This work fills a fundamental gap in the theoretical analysis of invariant models and establishes a principled foundation for their design, evaluation, and comparison.

📝 Abstract
Invariant models, one important class of geometric deep learning models, are capable of generating meaningful geometric representations by leveraging informative geometric features in point clouds. These models are characterized by their simplicity, good experimental results and computational efficiency. However, their theoretical expressive power still remains unclear, restricting a deeper understanding of the potential of such models. In this work, we concentrate on characterizing the theoretical expressiveness of a wide range of invariant models under fully-connected conditions. We first rigorously characterize the expressiveness of the most classic invariant model, message-passing neural networks incorporating distance (DisGNN), restricting its unidentifiable cases to be only highly symmetric point clouds. We then prove that GeoNGNN, the geometric counterpart of one of the simplest subgraph graph neural networks, can effectively break these corner cases' symmetry and thus achieve E(3)-completeness. By leveraging GeoNGNN as a theoretical tool, we further prove that: 1) most subgraph GNNs developed in traditional graph learning can be seamlessly extended to geometric scenarios with E(3)-completeness; 2) DimeNet, GemNet and SphereNet, three well-established invariant models, are also all capable of achieving E(3)-completeness. Our theoretical results fill the gap in the expressive power of invariant models, contributing to a rigorous and comprehensive understanding of their capabilities.
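To make the distance-based message passing concrete, below is a minimal sketch of a DisGNN-style layer on a fully connected point cloud. This is an illustrative PyTorch implementation under assumed layer widths and activation choices, not the authors' code; the class and method names are hypothetical.

```python
import torch
import torch.nn as nn

class DisGNNLayer(nn.Module):
    """One round of message passing conditioned on pairwise distances."""

    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim + 1, dim), nn.SiLU())
        self.upd = nn.Sequential(nn.Linear(2 * dim, dim), nn.SiLU())

    def forward(self, h: torch.Tensor, pos: torch.Tensor) -> torch.Tensor:
        n = h.size(0)
        # Pairwise distances are invariant to rotations, reflections, and
        # translations, so every quantity computed here is E(3)-invariant.
        dist = torch.cdist(pos, pos).unsqueeze(-1)       # (n, n, 1)
        h_i = h.unsqueeze(1).expand(n, n, h.size(-1))    # receiver copies
        h_j = h.unsqueeze(0).expand(n, n, h.size(-1))    # sender copies
        m = self.msg(torch.cat([h_i, h_j, dist], dim=-1)).sum(dim=1)
        return self.upd(torch.cat([h, m], dim=-1))

# Usage: the output is unchanged under any rigid motion of `pos`.
pos = torch.randn(5, 3)   # 5 points in R^3
h = torch.zeros(5, 16)    # initial node features
out = DisGNNLayer(16)(h, pos)   # shape (5, 16)
```

Because every message depends on positions only through distances, such a model cannot separate point clouds that share all distance statistics; the paper pins these failure cases down to highly symmetric configurations.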
Problem

Research questions and friction points this paper is trying to address.

Characterizing the theoretical expressiveness of invariant geometric deep learning models.
Proving E(3)-completeness for GeoNGNN and other well-established invariant models.
Extending subgraph GNNs to geometric scenarios while preserving E(3)-completeness.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Characterizes the expressiveness of invariant geometric models, reducing DisGNN's unidentifiable cases to highly symmetric point clouds.
Proves that GeoNGNN breaks the symmetry of these corner cases and achieves E(3)-completeness (see the sketch after this list).
Extends most subgraph GNNs from traditional graph learning to geometric scenarios with E(3)-completeness.
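The core mechanism behind the completeness result is the subgraph trick: run a shared distance-based network once per root node with that root marked, then pool. Below is a hedged sketch of this node-marking scheme, reusing the DisGNNLayer from the sketch above; the class name and pooling choices are illustrative assumptions, not the paper's exact GeoNGNN architecture.

```python
import torch
import torch.nn as nn

class NodeMarkedDisGNN(nn.Module):
    """Runs a shared distance GNN once per root node, with the root marked."""

    def __init__(self, dim: int, num_layers: int = 3):
        super().__init__()
        self.embed = nn.Linear(1, dim)  # embeds the 0/1 root marker
        self.layers = nn.ModuleList(DisGNNLayer(dim) for _ in range(num_layers))

    def forward(self, pos: torch.Tensor) -> torch.Tensor:
        n = pos.size(0)
        reps = []
        for root in range(n):
            # Marking one node breaks the symmetry of otherwise
            # indistinguishable points relative to that root.
            mark = torch.zeros(n, 1)
            mark[root] = 1.0
            h = self.embed(mark)
            for layer in self.layers:
                h = layer(h, pos)
            reps.append(h.sum(dim=0))        # pool within each rooted copy
        # Summing over roots keeps the output permutation- and E(3)-invariant.
        return torch.stack(reps).sum(dim=0)
```

Intuitively, a plain DisGNN assigns identical embeddings to all points of a highly symmetric cloud; marking a root collapses that symmetry, which is what the paper formalizes to prove E(3)-completeness.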
👥 Authors
Zian Li
Peking University
Graph Neural Networks

Xiyuan Wang
Institute for Artificial Intelligence, Peking University; School of Intelligence Science and Technology, Peking University

Shijia Kang
Peking University
LLMs

Muhan Zhang
Peking University
Machine Learning, Graph Neural Networks, Large Language Models