🤖 AI Summary
Vision Transformers (ViTs) excel at modeling intra-image local relationships but struggle to capture inter-sample global geometric structures. To address this, we propose a novel ViT enhancement framework that integrates manifold geometry with proximal optimization. First, we reinterpret self-attention as a multi-view geometric representation on the tangent bundle of the data manifold, thereby constructing its tangent space structure. Second, we introduce a differentiable proximal iterative algorithm that aligns features across samples and optimizes global geometry directly on the tangent bundle. This work establishes, for the first time, a rigorous theoretical connection between ViT self-attention and the differential-geometric concept of tangent bundles. Crucially, the entire framework is end-to-end trainable via differentiable proximal optimization. Empirical evaluation on image classification demonstrates significant improvements in both accuracy and feature distribution quality, validating the efficacy of explicit global geometric modeling for visual representation learning.
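The summary's reading of self-attention as a multi-view geometric representation can be sketched as follows: each attention head produces its own projected view of the token features, and the stack of head outputs plays the role of the per-head "tangent spaces". This is an illustrative toy, not the paper's implementation; the dimensions and weight initialization are arbitrary assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_views(X, Wq, Wk, Wv):
    """Return one attention output per head.

    Under the paper's interpretation, each head's output is a
    representation in one tangent space, so the stacked outputs
    form a multi-view (tangent-bundle-like) representation.
    """
    views = []
    for q, k, v in zip(Wq, Wk, Wv):
        Q, K, V = X @ q, X @ k, X @ v
        A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # token-token attention
        views.append(A @ V)                          # one head's "view"
    return np.stack(views)                           # (heads, tokens, d_head)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))            # 5 tokens, model dim 8 (assumed)
heads = 2
Wq = rng.normal(size=(heads, 8, 4))
Wk = rng.normal(size=(heads, 8, 4))
Wv = rng.normal(size=(heads, 8, 4))
views = multi_head_views(X, Wq, Wk, Wv)
print(views.shape)                     # (2, 5, 4): one 4-dim view per head
```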
📝 Abstract
The Vision Transformer (ViT) architecture has become widely recognized in computer vision, leveraging its self-attention mechanism to achieve remarkable success across various tasks. Despite its strengths, ViT's optimization remains confined to modeling local relationships within individual images, limiting its ability to capture the global geometric relationships between data points. To address this limitation, this paper proposes a novel framework that integrates ViT with proximal optimization tools, enabling a unified geometric optimization approach that enhances feature representation and classification performance. In this framework, ViT constructs the tangent bundle of the data manifold through its self-attention mechanism, where each attention head corresponds to a tangent space, offering geometric representations from diverse local perspectives. Proximal iterations are then introduced to define sections within the tangent bundle and project data from the tangent spaces onto the base space, achieving global feature alignment and optimization. Experimental results confirm that the proposed method outperforms the traditional ViT in both classification accuracy and feature distribution quality.
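The abstract does not spell out the proximal iteration, so the sketch below uses a generic proximal-gradient loop as a stand-in: a gradient step pulls each sample's features toward a shared target (global alignment), followed by an l1 proximal step (soft-thresholding). The alignment objective and the choice of soft-thresholding are illustrative assumptions, not the paper's actual operator.

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of lam * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def proximal_align(F, steps=50, lr=0.1, lam=0.01):
    """Toy proximal-gradient loop.

    Gradient step: pull features Z toward the global mean of F
    (a stand-in for projecting tangent-space data onto the base
    space). Proximal step: soft-threshold, encouraging sparsity.
    """
    Z = F.copy()
    mu = F.mean(axis=0, keepdims=True)      # shared alignment target
    for _ in range(steps):
        Z = soft_threshold(Z - lr * (Z - mu), lr * lam)
    return Z

rng = np.random.default_rng(1)
F = rng.normal(size=(10, 4))                # 10 samples, 4-dim features (assumed)
Z = proximal_align(F)
# After alignment the features are much more tightly clustered
print(np.var(F, axis=0).sum(), np.var(Z, axis=0).sum())
```

Because both the gradient and proximal steps are differentiable almost everywhere, a loop like this can be unrolled inside a network and trained end to end, which matches the abstract's claim of a differentiable proximal iterative algorithm.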