Spectral State Space Model for Rotation-Invariant~Visual~Representation~Learning

📅 2025-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing visual state-space models (SSMs) suffer from two key limitations: (1) reliance on predefined scanning orders, which hinders modeling of semantic relationships among non-adjacent image patches, and (2) sensitivity to geometric transformations such as rotation. To address these issues, we propose a rotation-invariant global representation learning framework. First, we construct a global relational graph among image patches via spectral decomposition of the graph Laplacian, thereby transcending local scanning constraints. Second, we introduce the Rotational Feature Normalizer (RFN), which enforces strict rotation invariance through frequency-domain feature normalization. Our method preserves linear computational complexity and enables real-time inference. Extensive experiments demonstrate that it significantly outperforms state-of-the-art visual SSMs—including VMamba—on image classification benchmarks, while exhibiting robustness to arbitrary in-plane rotations.
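The summary's first idea, building a global relational graph over image patches and decomposing its Laplacian, can be illustrated with a small sketch. This is not the paper's implementation: the graph construction (here a hypothetical Gaussian-kernel affinity on raw patch features), the kernel bandwidth `sigma`, and the number of retained eigenvectors `k` are all assumptions made for illustration.

```python
import numpy as np

def spectral_patch_embedding(patches, k=8, sigma=1.0):
    """Sketch: encode global patch relationships via spectral decomposition
    of a graph Laplacian, independent of any scanning order.
    Assumed (not from the paper): Gaussian-kernel affinity on raw patches."""
    n = patches.shape[0]
    feats = patches.reshape(n, -1).astype(float)
    # Pairwise squared distances between patch features
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    # Affinity matrix (hypothetical Gaussian kernel), no self-loops
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
    d = W.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(n) - D_inv_sqrt @ W @ D_inv_sqrt
    # Eigenvectors for the smallest eigenvalues carry global graph structure
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:k + 1]  # drop the trivial first eigenvector
```

Because the embedding depends only on pairwise patch affinities, it does not presuppose any raster or zigzag scan, which is the property the summary highlights.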

📝 Abstract
State Space Models (SSMs) have recently emerged as an alternative to Vision Transformers (ViTs) due to their unique ability to model global relationships with linear complexity. SSMs are specifically designed to capture spatially proximate relationships of image patches. However, they fail to identify relationships between conceptually related yet non-adjacent patches. This limitation arises from the non-causal nature of image data, which lacks inherent directional relationships. Additionally, current vision-based SSMs are highly sensitive to transformations such as rotation. Their predefined scanning directions depend on the original image orientation, which can cause the model to produce inconsistent patch-processing sequences after rotation. To address these limitations, we introduce Spectral VMamba, a novel approach that effectively captures the global structure within an image by leveraging spectral information derived from the graph Laplacian of image patches. Through spectral decomposition, our approach encodes patch relationships independently of image orientation, achieving rotation invariance with the aid of our Rotational Feature Normalizer (RFN) module. Our experiments on classification tasks show that Spectral VMamba outperforms leading vision SSMs such as VMamba, while maintaining invariance to rotations and providing similar runtime efficiency.
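The rotation-invariance mechanism the abstract attributes to the RFN module, frequency-domain feature normalization, can be illustrated by a classical identity: features sampled on a ring around a patch center are cyclically shifted by an in-plane rotation, and the magnitude of their 1D DFT is unchanged by any cyclic shift. The sketch below only demonstrates this principle; the paper's actual RFN design is not reproduced, and the ring-sampling setup is an assumption.

```python
import numpy as np

def rotation_invariant_descriptor(ring_samples):
    """Sketch of frequency-domain normalization for rotation invariance.
    Assumed setup (not from the paper): features sampled at equal angles
    on a ring, so an in-plane rotation is a cyclic shift of the samples.
    The DFT magnitude discards the phase that encodes the shift."""
    spectrum = np.fft.fft(ring_samples)
    return np.abs(spectrum)
```

Since `|FFT(roll(x, s))| = |FFT(x)|` for any shift `s`, the descriptor is identical for the original and rotated samplings, which is the kind of strict invariance the abstract claims for arbitrary in-plane rotations.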
Problem

Research questions and friction points this paper is trying to address.

How to capture global image structure beyond predefined local scanning orders
How to achieve rotation invariance in visual representation learning
How to improve classification performance over existing visual SSMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages spectral decomposition of the patch graph Laplacian to capture global structure
Introduces the Rotational Feature Normalizer (RFN) for strict rotation invariance
Encodes patch relationships independently of image orientation