🤖 AI Summary
Existing self-supervised learning (SSL) methods overemphasize feature invariance while neglecting geometric equivariance, a property critical for visual perception. Method: We propose an SSL framework that guides models toward equivariance-coherent representations by reconstructing intermediate images produced by unknown geometric transformations (e.g., translation, rotation), without prescribing transformation types or imposing explicit equivariance constraints. Contribution/Results: This work introduces intermediate-transformation image reconstruction as an auxiliary task to jointly optimize invariance and equivariance. We design a dual-branch feature-disentanglement architecture that jointly minimizes the reconstruction loss and a standard SSL loss, fully compatible with state-of-the-art frameworks such as iBOT and DINOv2. Extensive experiments on synthetic and real-world datasets demonstrate consistently superior performance across downstream classification, detection, and segmentation tasks, outperforming SOTA baselines.
📝 Abstract
The equivariant behaviour of features is essential in many computer vision tasks, yet popular self-supervised learning (SSL) methods tend to suppress equivariance by design. We propose an SSL approach in which the model learns about transformations independently, by reconstructing images that have undergone previously unseen transformations. Specifically, the model is tasked with reconstructing intermediate transformed images, e.g. translated or rotated images, without prior knowledge of these transformations. This auxiliary task encourages the model to develop equivariance-coherent features without relying on predefined transformation rules. To this end, we apply transformations to the input image to generate an image pair, and then split the extracted features of each image into two sets. One set is used with a standard SSL loss encouraging invariance; the other with our loss based on the auxiliary task of reconstructing the intermediate transformed images. The two losses are combined as a weighted linear sum. On synthetic tasks with natural images, our proposed method strongly outperforms all competitors, whether or not they are designed to learn equivariance. Furthermore, when trained alongside augmentation-based methods such as iBOT or DINOv2 as the invariance task, we successfully learn a balanced combination of invariant and equivariant features. Our approach performs strongly on a rich set of realistic computer vision downstream tasks, almost always improving over all baselines.
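The training objective described above (split features per view, apply an invariance loss to one set and an intermediate-reconstruction loss to the other, then combine with weights) can be sketched as follows. This is a minimal illustrative NumPy sketch, not the paper's implementation: all shapes, the split point, the toy MSE losses, and the weights `w_ssl` and `w_rec` are assumptions standing in for the actual SSL loss (e.g. iBOT/DINOv2) and decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_features(feats, n_inv):
    # Split extracted features into an invariance set and an equivariance set
    # (the paper splits features into two sets per image; the split point is assumed).
    return feats[:n_inv], feats[n_inv:]

def ssl_invariance_loss(f_a, f_b):
    # Toy stand-in for a standard SSL invariance loss between the two views.
    return float(np.mean((f_a - f_b) ** 2))

def reconstruction_loss(pred_img, target_img):
    # Toy pixel MSE against the intermediate transformed image.
    return float(np.mean((pred_img - target_img) ** 2))

# Hypothetical feature vectors for the two transformed views of one input image.
feats_a = rng.normal(size=128)
feats_b = rng.normal(size=128)
inv_a, eq_a = split_features(feats_a, 64)
inv_b, eq_b = split_features(feats_b, 64)

# Placeholder for a decoder's reconstruction from the equivariance features,
# and the ground-truth intermediate transformed image it should match.
pred_intermediate = rng.normal(size=(8, 8))
target_intermediate = rng.normal(size=(8, 8))

# Weighted linear combination of the two losses (weight values are illustrative).
w_ssl, w_rec = 1.0, 0.5
total = (w_ssl * ssl_invariance_loss(inv_a, inv_b)
         + w_rec * reconstruction_loss(pred_intermediate, target_intermediate))
print(total)
```

In a real training loop both terms would be differentiable (e.g. in PyTorch or JAX) and backpropagated jointly, so the encoder is pushed to keep invariant information in one feature set while retaining transformation information in the other.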