Proximal Vision Transformer: Enhancing Feature Representation through Two-Stage Manifold Geometry

📅 2025-08-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision Transformers (ViTs) excel at modeling intra-image local relationships but struggle to capture inter-sample global geometric structures. To address this, we propose a novel ViT enhancement framework that integrates manifold geometry with proximal optimization. First, we reinterpret self-attention as a multi-view geometric representation on the tangent bundle of the data manifold, thereby constructing its tangent space structure. Second, we introduce a differentiable proximal iterative algorithm that aligns features across samples and optimizes global geometry directly on the tangent bundle. This work establishes, for the first time, a rigorous theoretical connection between ViT self-attention and the differential-geometric concept of tangent bundles. Crucially, the entire framework is end-to-end trainable via differentiable proximal optimization. Empirical evaluation on image classification demonstrates significant improvements in both accuracy and feature distribution quality, validating the efficacy of explicit global geometric modeling for visual representation learning.

📝 Abstract
The Vision Transformer (ViT) architecture has become widely recognized in computer vision, leveraging its self-attention mechanism to achieve remarkable success across various tasks. Despite these strengths, ViT's optimization remains confined to modeling local relationships within individual images, limiting its ability to capture global geometric relationships between data points. To address this limitation, this paper proposes a novel framework that integrates ViT with proximal tools, enabling a unified geometric optimization approach to enhance feature representation and classification performance. In this framework, ViT constructs the tangent bundle of the data manifold through its self-attention mechanism, where each attention head corresponds to a tangent space, offering geometric representations from diverse local perspectives. Proximal iterations are then introduced to define sections of the tangent bundle and project data from the tangent spaces onto the base space, achieving global feature alignment and optimization. Experimental results confirm that the proposed method outperforms the traditional ViT in classification accuracy and feature distribution quality.
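The abstract's "proximal iterations project tangent-space data onto the base space" step can be illustrated with a minimal sketch. The snippet below is a hypothetical NumPy illustration, not the authors' implementation (the names `soft_threshold` and `proximal_align` and the L1 penalty are our assumptions): per-head features stand in for tangent-space views, and a proximal gradient loop pulls them toward a single shared base-space representation.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def proximal_align(head_feats, lam=0.1, step=0.5, n_iter=200):
    """Toy proximal iteration: align per-head (tangent-space) features
    into one base-space vector by minimizing
        mean_h 0.5 * ||z - f_h||^2  +  lam * ||z||_1
    with proximal gradient steps.

    head_feats: (H, D) array, one feature vector per attention head.
    """
    z = head_feats.mean(axis=0)               # initial base-space estimate
    for _ in range(n_iter):
        grad = (z - head_feats).mean(axis=0)  # gradient of the smooth term
        z = soft_threshold(z - step * grad, step * lam)
    return z
```

The fixed point here is simply the soft-thresholded mean of the head features; in the paper's framework, this averaging-plus-shrinkage role is presumably played by a learned, differentiable proximal operator inside the network.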
Problem

Research questions and friction points this paper is trying to address.

Enhancing ViT's global geometric relationship capture
Integrating proximal tools for unified geometric optimization
Improving feature representation and classification performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrating ViT with proximal tools for unified optimization
Using self-attention to construct manifold tangent bundle geometry
Employing proximal iterations for global feature alignment
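The "each attention head corresponds to a tangent space" reading above can be sketched concretely. The snippet below is our own illustrative NumPy sketch under that interpretation, not the authors' code: standard scaled dot-product attention is computed per head, and each head's output is kept separate so it can be read as one local geometric view of the token features.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_views(X, Wq, Wk, Wv):
    """Return each head's attention output as a separate 'tangent-space' view.

    X: (N, D) token features; Wq, Wk, Wv: (H, D, d) per-head projections.
    Returns an (H, N, d) array: H local geometric views of the same tokens.
    """
    views = []
    for q, k, v in zip(Wq, Wk, Wv):
        Q, K, V = X @ q, X @ k, X @ v
        A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # (N, N) attention map
        views.append(A @ V)                          # this head's view
    return np.stack(views)
```

Stacking the views instead of concatenating them (as a standard ViT would) makes the multi-view structure explicit, which is the input the proximal alignment step operates on.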