Abstract
Image segmentation is one of the most fundamental problems in computer vision and has attracted substantial attention due to its vast applications in image understanding and autonomous driving. However, designing effective and efficient segmentation neural architectures is a labor-intensive process that may require numerous trials by human experts. In this paper, we address the challenge of efficiently integrating multi-head self-attention into high-resolution representation CNNs by leveraging architecture search. Manually replacing convolution layers with multi-head self-attention is non-trivial because maintaining high resolution incurs a costly memory overhead. By contrast, we develop a multi-target multi-branch supernet method, which not only fully utilizes the advantages of high-resolution features but also finds the proper locations for placing the multi-head self-attention modules. Our search algorithm is optimized towards multiple objectives (e.g., latency and mIoU) and is capable of finding architectures on the Pareto frontier with an arbitrary number of branches in a single search. We further present a series of models via the Hybrid Convolutional-Transformer Architecture Search (HyCTAS) method, which searches for the best hybrid combination of lightweight convolution layers and memory-efficient self-attention layers across branches of different resolutions and fuses them back to high resolution for both efficiency and effectiveness. Extensive experiments demonstrate that HyCTAS outperforms previous methods on both semantic segmentation and panoptic segmentation tasks. Code and models are available at https://github.com/MarvinYu1995/HyCTAS.
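To make the multi-objective selection step concrete, the sketch below shows how candidate architectures scored on two objectives (latency, to be minimized, and mIoU, to be maximized) can be filtered down to their Pareto frontier. This is a minimal, generic illustration of Pareto dominance, not the paper's actual search algorithm; the candidate tuples are hypothetical.

```python
def pareto_front(candidates):
    """Return the Pareto-optimal subset of (latency, miou) pairs.

    A candidate is dominated if some other candidate is no worse on
    both objectives (lower-or-equal latency, higher-or-equal mIoU)
    and strictly better on at least one.
    """
    front = []
    for lat, miou in candidates:
        dominated = any(
            o_lat <= lat and o_miou >= miou
            and (o_lat < lat or o_miou > miou)
            for o_lat, o_miou in candidates
        )
        if not dominated:
            front.append((lat, miou))
    return front


# Hypothetical (latency in ms, mIoU in %) scores for searched architectures.
archs = [(10, 70.0), (20, 75.0), (15, 72.0), (25, 74.0), (12, 68.0)]
print(sorted(pareto_front(archs)))  # [(10, 70.0), (15, 72.0), (20, 75.0)]
```

Here (25, 74.0) is dominated by (20, 75.0) and (12, 68.0) by (10, 70.0), so only the three trade-off points survive; a single supernet search that returns such a frontier lets practitioners pick a model per deployment budget.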