🤖 AI Summary
Conventional Deep Equilibrium Models (DEQs) were designed for sequential inputs and lack permutation invariance, making them ill-suited to discrete, unordered data such as sets or point clouds, i.e., discrete probability measures. Method: We propose Distributional DEQs (DDEQs), the first implicit neural network framework operating directly on spaces of measures. DDEQs formulate fixed-point equations in Wasserstein space via Wasserstein gradient flows, combining optimal transport theory with implicit differentiation to rigorously enforce permutation invariance, and we design an equivariant architecture that enables equilibrium modeling at the level of distributions, extending representation learning from Euclidean to distributional inputs. Contribution/Results: On point cloud classification and completion benchmarks, DDEQs match state-of-the-art performance with significantly fewer parameters, demonstrating both theoretical soundness and modeling efficiency.
📝 Abstract
Deep Equilibrium Models (DEQs) are a class of implicit neural networks that solve for a fixed point of a neural network in their forward pass. Traditionally, DEQs take sequences as inputs, but they have since been applied to a variety of data types. In this work, we present Distributional Deep Equilibrium Models (DDEQs), extending DEQs to discrete measure inputs, such as sets or point clouds. We provide a theoretically grounded framework for DDEQs. Leveraging Wasserstein gradient flows, we show how the forward pass of the DEQ can be adapted to find fixed points of discrete measures under permutation invariance, and derive suitable network architectures for DDEQs. In experiments, we show that they can compete with state-of-the-art models in tasks such as point cloud classification and point cloud completion, while being significantly more parameter-efficient.
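To make the fixed-point forward pass described above concrete, here is a minimal sketch of a vanilla (Euclidean) DEQ layer in NumPy. The layer `f(z, x) = tanh(W z + U x + b)` and the naive fixed-point iteration are illustrative assumptions for exposition only; they are not the DDEQ architecture of the paper, and practical DEQs typically use accelerated solvers such as Anderson or Broyden iteration.

```python
import numpy as np

def deq_forward(x, W, U, b, tol=1e-6, max_iter=500):
    """Solve z* = f(z*, x) with f(z, x) = tanh(W @ z + U @ x + b).

    Naive fixed-point iteration; converges when f is a contraction in z
    (here ensured by scaling W to have small norm).
    """
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_next = np.tanh(W @ z + U @ x + b)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

rng = np.random.default_rng(0)
d = 8
# Scale W down so the update map is a contraction and iteration converges.
W = 0.1 * rng.standard_normal((d, d))
U = rng.standard_normal((d, d))
b = rng.standard_normal(d)
x = rng.standard_normal(d)

z_star = deq_forward(x, W, U, b)
# The equilibrium satisfies the fixed-point equation up to tolerance:
residual = np.linalg.norm(z_star - np.tanh(W @ z_star + U @ x + b))
```

The DDEQ setting replaces the Euclidean state `z` with a discrete measure (a point cloud) and the iteration with a Wasserstein gradient flow, so that the equilibrium is invariant to permutations of the input points.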