🤖 AI Summary
To address the high computational cost of k-nearest-neighbor (KNN) graph construction and the over-squashing caused by the fixed step scale of Sparse Vision Graph Attention (SVGA) in Vision Graph Neural Networks (ViGs), this paper proposes LogViG—a multi-scale, high-resolution hybrid CNN-GNN architecture built on Logarithmic Scalable Graph Construction (LSGC). LSGC expands the graph neighborhood logarithmically rather than at a fixed step, limiting the number of long-range links, and the architecture adds a high-resolution branch with cross-scale feature fusion between the high- and low-resolution branches to strengthen global contextual modeling. On ImageNet-1K, the smallest variant, Ti-LogViG, achieves 79.9% average Top-1 accuracy—1.7% higher than Vision GNN—with 24.3% fewer parameters and 35.3% fewer GMACs, outperforming existing ViG, CNN, and ViT architectures on image classification and semantic segmentation.
📝 Abstract
Vision graph neural networks (ViGs) have demonstrated promise in vision tasks as a competitive alternative to conventional convolutional neural networks (CNNs) and vision transformers (ViTs); however, common graph construction methods, such as k-nearest neighbor (KNN), can be expensive on larger images. While methods such as Sparse Vision Graph Attention (SVGA) have shown promise, SVGA's fixed step scale can lead to over-squashing and require multiple connections to gain the same information that a single long-range link could provide. Motivated by this observation, we propose a new graph construction method, Logarithmic Scalable Graph Construction (LSGC), to enhance performance by limiting the number of long-range links. To this end, we propose LogViG, a novel hybrid CNN-GNN model that utilizes LSGC. Furthermore, inspired by the successes of multi-scale and high-resolution architectures, we introduce a high-resolution branch and fuse features between our high-resolution and low-resolution branches, yielding a multi-scale, high-resolution Vision GNN network. Extensive experiments show that LogViG beats existing ViG, CNN, and ViT architectures in terms of accuracy, GMACs, and parameters on image classification and semantic segmentation tasks. Our smallest model, Ti-LogViG, achieves an average top-1 accuracy on ImageNet-1K of 79.9% with a standard deviation of 0.2%, 1.7% higher average accuracy than Vision GNN with a 24.3% reduction in parameters and a 35.3% reduction in GMACs. Our work shows that leveraging long-range links in graph construction for ViGs through our proposed LSGC can exceed the performance of current state-of-the-art ViGs. Code is available at https://github.com/mmunir127/LogViG-Official.
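To make the contrast between fixed-step and logarithmic graph construction concrete, here is a minimal, hypothetical sketch (not the authors' implementation; function names and the 1D simplification are assumptions). A fixed step scale connects a token to neighbors at a constant stride, so information from far-away tokens must traverse many edges, while logarithmic expansion places neighbors at exponentially growing offsets, reaching distant tokens directly with only O(log n) links:

```python
# Hypothetical sketch contrasting fixed-step neighbor selection
# (SVGA-style) with logarithmic neighbor selection (LSGC-style)
# along one axis of a token grid. Names are illustrative only.

def fixed_step_neighbors(i, n, step):
    """Connect token i to every `step`-th token to its right (fixed step scale)."""
    return list(range(i + step, n, step))

def log_scale_neighbors(i, n):
    """Connect token i at exponentially growing offsets 1, 2, 4, ...,
    giving O(log n) neighbors that include direct long-range links."""
    neighbors, offset = [], 1
    while i + offset < n:
        neighbors.append(i + offset)
        offset *= 2
    return neighbors

if __name__ == "__main__":
    n = 16
    print(fixed_step_neighbors(0, n, 2))  # [2, 4, 6, 8, 10, 12, 14]
    print(log_scale_neighbors(0, n))      # [1, 2, 4, 8]
```

The logarithmic scheme reaches token 8 in a single hop with fewer edges overall, which is the intuition behind limiting long-range links to curb over-squashing.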