🤖 AI Summary
This work addresses the limited interpretability of self-attention mechanisms in Vision Transformers for monocular depth estimation. To this end, the authors propose a singular value decomposition (SVD)-inspired attention mechanism, termed SVDA, which establishes the first spectrally structured attention framework for dense prediction tasks. SVDA decouples directional alignment from spectral modulation and introduces a learnable diagonal matrix to model query-key interactions. Furthermore, six quantifiable spectral metrics are defined to elucidate attention behavior. Evaluated on the KITTI and NYU-v2 datasets, SVDA achieves accuracy comparable to, or slightly better than, the DPT baseline with negligible computational overhead, and exhibits consistent attention organization patterns across datasets, substantially enhancing model interpretability.
📝 Abstract
Monocular depth estimation is a central problem in computer vision with applications in robotics, AR, and autonomous driving, yet the self-attention mechanisms that drive modern Transformer architectures remain opaque. We introduce SVD-Inspired Attention (SVDA) into the Dense Prediction Transformer (DPT), providing the first spectrally structured formulation of attention for dense prediction tasks. SVDA decouples directional alignment from spectral modulation by embedding a learnable diagonal matrix into normalized query-key interactions, enabling attention maps that are intrinsically interpretable rather than post-hoc approximations. Experiments on KITTI and NYU-v2 show that SVDA preserves or slightly improves predictive accuracy while adding only minor computational overhead. More importantly, SVDA unlocks six spectral indicators that quantify entropy, rank, sparsity, alignment, selectivity, and robustness. These reveal consistent cross-dataset and depth-wise patterns in how attention organizes during training, insights that remain inaccessible in standard Transformers. By shifting the role of attention from opaque mechanism to quantifiable descriptor, SVDA redefines interpretability in monocular depth estimation and opens a principled avenue toward transparent dense prediction models.
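The abstract describes SVDA only at a high level: row-normalized query-key interactions (directional alignment) rescaled by a learnable diagonal matrix (spectral modulation), loosely mirroring the U·diag(S)·Vᵀ structure of an SVD. The following NumPy sketch is an illustrative reading of that description, not the authors' implementation; the function and variable names (`svda_attention`, `sigma`, `attention_entropy`) are hypothetical, and the entropy indicator is one plausible instance of the six spectral metrics the paper defines.

```python
import numpy as np

def svda_attention(Q, K, V, sigma):
    """Sketch of SVD-inspired attention as described in the abstract:
    L2-normalize query/key rows (directional alignment), then rescale
    per-dimension interactions with a learnable diagonal `sigma`
    (spectral modulation) before the softmax."""
    Qn = Q / np.linalg.norm(Q, axis=-1, keepdims=True)  # unit-norm query directions
    Kn = K / np.linalg.norm(K, axis=-1, keepdims=True)  # unit-norm key directions
    scores = (Qn * sigma) @ Kn.T                        # diagonally modulated similarity
    scores -= scores.max(axis=-1, keepdims=True)        # subtract row max for stability
    A = np.exp(scores)
    A /= A.sum(axis=-1, keepdims=True)                  # softmax over keys
    return A @ V, A

def attention_entropy(A):
    """One candidate spectral indicator: mean Shannon entropy of the
    attention rows, measuring how diffusely each query attends."""
    return float(-(A * np.log(A + 1e-12)).sum(axis=-1).mean())
```

With `sigma` fixed to all-ones this reduces to cosine-similarity attention; the learnable diagonal is what lets individual feature dimensions be amplified or suppressed, and inspecting `sigma` and the resulting attention maps is what makes the mechanism interpretable by construction rather than via post-hoc attribution.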