🤖 AI Summary
To address the high computational cost, excessive memory consumption, and slow inference of Vision Transformers in high-resolution image and long-video modeling, this paper proposes GSPN-2: an algorithm-system co-optimized line-scan propagation architecture. Methodologically, it introduces (1) a unified 2D GPU kernel that replaces thousands of micro-kernel launches, eliminating launch overhead; (2) a channel-shared compact propagation strategy that reduces redundant parameters and computation; and (3) synergistic optimizations, including column-wise activation caching in shared memory, warp-level channel-slice scheduling, and structured matrix transformations, to achieve near-linear-complexity global spatial modeling. Evaluated on image classification and text-to-image generation tasks, GSPN-2 matches Transformer-level accuracy while reducing GPU memory usage by 42%, accelerating inference by 3.1×, and cutting FLOPs by 67%.
📝 Abstract
The efficiency of vision transformers remains a bottleneck for real-world applications involving high-resolution images and long videos. The Generalized Spatial Propagation Network (GSPN) addresses this by replacing quadratic self-attention with a line-scan propagation scheme, bringing the cost close to linear in the number of rows or columns while retaining accuracy. Despite this advancement, the existing GSPN implementation still suffers from (i) heavy overhead from repeatedly launching GPU kernels, (ii) excessive data transfers from global GPU memory, and (iii) redundant computation caused by maintaining separate propagation weights for each channel. We introduce GSPN-2, a joint algorithm-system redesign. In particular, we fuse the thousands of micro-launches of the previous implementation into a single 2D kernel, explicitly pin one warp to each channel slice, and stage the previous column's activations in shared memory. On the model side, we introduce a compact channel propagation strategy that replaces per-channel propagation matrices, trimming parameters and aligning naturally with the affinity map used in transformer attention. Experiments demonstrate GSPN-2's effectiveness across image classification and text-to-image synthesis tasks, matching transformer-level accuracy at significantly lower computational cost. Through its combination of structured matrix transformations and a GPU-optimized implementation, GSPN-2 establishes a new efficiency frontier for modeling global spatial context in vision applications. Project page: https://whj363636.github.io/GSPN2/
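To make the "line-scan propagation" idea concrete, the following is a minimal NumPy sketch of a left-to-right column scan in the style of the original GSPN: each column's hidden state is computed from the previous column through a small neighborhood (here, the three vertically adjacent rows), so one full pass costs O(H·W) per direction rather than the quadratic cost of self-attention. The function and parameter names (`line_scan_propagate`, `lam`, `w`) are illustrative, not the paper's actual API, and the real kernel fuses this recurrence into a single GPU launch with shared-memory staging.

```python
import numpy as np

def line_scan_propagate(x, lam, w):
    """Left-to-right line scan over a 2D feature map (illustrative sketch).

    x   : (H, W) input feature map
    lam : (H, W) per-position input gate
    w   : (H, W, 3) weights on the 3 vertical neighbors of the previous column
    """
    H, W = x.shape
    h = np.zeros_like(x)
    h[:, 0] = lam[:, 0] * x[:, 0]          # first column: gated input only
    for j in range(1, W):                   # sequential scan over columns
        prev = h[:, j - 1]
        up = np.concatenate(([0.0], prev[:-1]))    # neighbor one row above
        down = np.concatenate((prev[1:], [0.0]))   # neighbor one row below
        # 3-neighbor recurrence: gated input + weighted previous column
        h[:, j] = (lam[:, j] * x[:, j]
                   + w[:, j, 0] * up
                   + w[:, j, 1] * prev
                   + w[:, j, 2] * down)
    return h
```

The scan is linear in the number of columns; the per-channel version of this recurrence is what GSPN-2's channel-shared strategy compacts, and the per-column loop is what its unified 2D kernel executes without repeated launches.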