Capsule Network Projectors are Equivariant and Invariant Learners

📅 2024-05-23
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of jointly modeling invariance and equivariance in self-supervised learning. The authors propose CapsIE, the first architecture to systematically integrate capsule networks (CapsNets) into equivariant self-supervised learning. Its core contributions are: (1) leveraging CapsNets' intrinsic pose vectors to explicitly encode both class-level invariance and geometric equivariance under transformations; and (2) introducing an entropy-minimisation-based joint objective for parameter-efficient co-learning of invariant and equivariant representations. On the 3DIEBench rotation benchmark, CapsIE achieves state-of-the-art performance among equivariant self-supervised methods, performs competitively with supervised counterparts, and demonstrates strong generalisation on large-scale, multi-task data.

📝 Abstract
Learning invariant representations has been the longstanding approach to self-supervised learning. However, progress has recently been made in preserving equivariant properties in representations, yet existing methods do so with highly prescribed architectures. In this work, we propose an invariant-equivariant self-supervised architecture that employs Capsule Networks (CapsNets), which have been shown to capture equivariance with respect to novel viewpoints. We demonstrate that the use of CapsNets in equivariant self-supervised architectures achieves improved downstream performance on equivariant tasks with higher efficiency and fewer network parameters. To accommodate the architectural changes of CapsNets, we introduce a new objective function based on entropy minimisation. This approach, which we name CapsIE (Capsule Invariant Equivariant Network), achieves state-of-the-art performance on the equivariant rotation tasks of the 3DIEBench dataset compared to prior equivariant SSL methods, while performing competitively against supervised counterparts. Our results demonstrate the ability of CapsNets to learn complex and generalised representations for large-scale, multi-task datasets compared to previous CapsNet benchmarks. Code is available at https://github.com/AberdeenML/CapsIE.
Problem

Research questions and friction points this paper is trying to address.

Jointly learning invariant and equivariant representations with Capsule Networks
Improving efficiency and reducing parameter counts in equivariant self-supervised learning
Achieving state-of-the-art performance on 3D rotation tasks (3DIEBench)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Employs Capsule Networks for equivariant self-supervised learning
Introduces an entropy-minimisation-based objective function
Achieves competitive performance with fewer network parameters
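The entropy-minimisation idea behind the objective can be sketched as follows. This is a generic illustration of one common form of such a loss (encouraging confident per-sample capsule assignments while keeping the batch-level assignment distribution spread out to avoid collapse); it is not the exact CapsIE objective, which is defined in the paper and the linked repository. The function name and tensor shapes are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def entropy_minimisation_loss(capsule_logits):
    """Sketch of an entropy-based SSL objective on class-capsule activations.

    capsule_logits: array of shape (batch, num_capsules), unnormalised.
    Minimises per-sample entropy (confident capsule assignments) while
    maximising the entropy of the batch-averaged assignment distribution
    (a standard anti-collapse term). Not the exact CapsIE loss.
    """
    probs = softmax(capsule_logits)                                   # (B, K)
    # Average entropy of each sample's capsule assignment: drive this down.
    per_sample_entropy = -(probs * np.log(probs + 1e-8)).sum(axis=-1).mean()
    # Entropy of the mean assignment over the batch: drive this up.
    mean_probs = probs.mean(axis=0)                                   # (K,)
    batch_entropy = -(mean_probs * np.log(mean_probs + 1e-8)).sum()
    return per_sample_entropy - batch_entropy

# Usage: random activations for a batch of 16 samples over 10 class capsules
rng = np.random.default_rng(0)
loss = entropy_minimisation_loss(rng.normal(size=(16, 10)))
```

Note that for perfectly uniform logits the two entropy terms cancel, so the loss is near zero; confident, diverse assignments push it negative.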
👥 Authors
Miles Everett, Department of Computing Science, University of Aberdeen, UK
A. Durrant, Department of Computing Science, University of Aberdeen, UK
Mingjun Zhong, Department of Computing Science, University of Aberdeen, UK
G. Leontidis, Department of Computing Science, University of Aberdeen, UK; Interdisciplinary Centre for Data and AI, University of Aberdeen, UK