Interpreting Equivariant Representations

📅 2024-01-23
🏛️ International Conference on Machine Learning
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the misuse of invariant and equivariant representations in the latent spaces of equivariant neural networks, demonstrating that neglecting their inherent inductive biases degrades downstream task performance. To remedy this, the authors propose information-preserving invariant projections that systematically characterize and handle the inherent ambiguity in equivariant representations, yielding principled design guidelines. Experiments show that (i) in molecular graph generation, invariant projections can be designed that incur no loss of information; (ii) in image classification, random invariant projections yield invariant representations that retain a high degree of information; and (iii) the same phenomena arise in standard networks where invariance is encouraged via data augmentation. The framework provides both theoretical grounding and practical tools for improving the interpretability, compressibility, and downstream usability of equivariant representations.

📝 Abstract
Latent representations are used extensively for downstream tasks, such as visualization, interpolation, or feature extraction of deep learning models. Invariant and equivariant neural networks are powerful and well-established models for enforcing inductive biases. In this paper, we demonstrate that the inductive bias imposed by an equivariant model must also be taken into account when using its latent representations. We show how not accounting for the inductive biases leads to decreased performance on downstream tasks, and vice versa, how accounting for inductive biases can be done effectively by using an invariant projection of the latent representations. We propose principles for how to choose such a projection, and show the impact of using these principles in two common examples: First, we study a permutation-equivariant variational auto-encoder trained for molecule graph generation; here we show that invariant projections can be designed that incur no loss of information in the resulting invariant representation. Next, we study a rotation-equivariant representation used for image classification. Here, we illustrate how random invariant projections can be used to obtain an invariant representation with a high degree of retained information. In both cases, the analysis of invariant latent representations proves superior to their equivariant counterparts. Finally, we illustrate that the phenomena documented here for equivariant neural networks have counterparts in standard neural networks where invariance is encouraged via augmentation. Thus, while these ambiguities may be known by experienced developers of equivariant models, we make both the knowledge as well as effective tools to handle the ambiguities available to the broader community.
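The core idea can be illustrated with a minimal sketch (not the paper's exact constructions): for a permutation-equivariant latent, a canonical row ordering gives an invariant representative of the orbit without losing information; for a rotation-equivariant latent, the Gram matrix of pairwise inner products is invariant to a shared rotation. Function names and shapes below are illustrative assumptions.

```python
import numpy as np

def permutation_invariant_projection(Z):
    """Map a permutation-equivariant latent Z (n_nodes x d) to a
    canonical orbit representative by sorting the rows
    lexicographically. Any row permutation of Z maps to the same
    output, so no information is lost beyond the node ordering."""
    order = np.lexsort(Z.T[::-1])  # primary key = first column
    return Z[order]

def rotation_invariant_projection(Z):
    """For a latent Z (k vectors in R^d) that rotate jointly under
    a shared rotation R (Z -> Z @ R.T), the Gram matrix Z @ Z.T is
    invariant: (Z R^T)(Z R^T)^T = Z Z^T."""
    return Z @ Z.T
```

Both projections discard exactly the group-valued ambiguity (node order, rotation) that makes raw equivariant latents unreliable for downstream comparison, while keeping everything else.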
Problem

Research questions and friction points this paper is trying to address.

Equivariant Neural Networks
Invariant and Equivariant Features
Molecular Graph Generation and Image Recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Invariant Perspective
Equivariant Models
Latent Features
Andreas Abildtrup Hansen
Department of Visual Computing, Technical University of Denmark, Kgs. Lyngby, Denmark
Anna Calissano
Inria, Université Côte d’Azur, France; Department of Mathematics, Imperial College London, London, England
Aasa Feragen
Professor, DTU Compute
Machine learning · medical imaging · geometric modelling