Real-Time Image Segmentation via Hybrid Convolutional-Transformer Architecture Search

๐Ÿ“… 2024-03-15
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 4
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address the high computational cost of multi-head self-attention and its inefficient integration with convolutional operations in high-resolution image segmentation, this paper proposes a multi-objective, multi-resolution hybrid architecture search method. The authors construct a scalable multi-branch supernet that enables joint placement and fusion of lightweight convolutional modules and memory-efficient high-resolution self-attention modules across all feature scales, which they describe as the first such unified design. The approach employs single-shot neural architecture search to directly yield Pareto-optimal hybrid architectures. Evaluated on benchmarks including Cityscapes, the method achieves state-of-the-art performance in both semantic and panoptic segmentation, delivering significant Pareto improvements in the mIoU-versus-latency trade-off. These results empirically validate the effectiveness and practicality of co-designing convolutional and Transformer-based components for high-resolution segmentation.
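The summary's claim of "Pareto-optimal hybrid architectures" refers to candidates that no other candidate beats on both objectives at once (lower latency and higher mIoU). A minimal sketch of that selection rule, with illustrative architecture names and numbers that are not taken from the paper:

```python
def pareto_front(candidates):
    """Return the candidates not dominated by any other.

    One candidate dominates another if it is no slower AND no less
    accurate, and strictly better on at least one of the two axes.
    Each candidate is (name, latency_ms, miou). Latency is minimized,
    mIoU maximized.
    """
    front = []
    for name, lat, miou in candidates:
        dominated = any(
            (l2 <= lat and m2 >= miou) and (l2 < lat or m2 > miou)
            for _, l2, m2 in candidates
        )
        if not dominated:
            front.append((name, lat, miou))
    return front


# Hypothetical search results, for illustration only.
candidates = [
    ("arch_a", 12.0, 78.1),
    ("arch_b", 15.0, 77.5),  # slower and less accurate than arch_a
    ("arch_c", 20.0, 80.3),
    ("arch_d", 25.0, 80.0),  # slower and less accurate than arch_c
]
print(pareto_front(candidates))  # [('arch_a', 12.0, 78.1), ('arch_c', 20.0, 80.3)]
```

A multi-objective search returns this whole frontier rather than a single model, so practitioners can pick the point matching their latency budget.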

๐Ÿ“ Abstract
Image segmentation is one of the most fundamental problems in computer vision and has drawn a lot of attention due to its vast applications in image understanding and autonomous driving. However, designing effective and efficient segmentation neural architectures is a labor-intensive process that may require numerous trials by human experts. In this paper, we address the challenge of integrating multi-head self-attention into high-resolution representation CNNs efficiently by leveraging architecture search. Manually replacing convolution layers with multi-head self-attention is non-trivial due to the costly overhead in memory to maintain high resolution. By contrast, we develop a multi-target multi-branch supernet method, which not only fully utilizes the advantages of high-resolution features but also finds the proper location for placing the multi-head self-attention module. Our search algorithm is optimized towards multiple objectives (e.g., latency and mIoU) and is capable of finding architectures on the Pareto frontier with an arbitrary number of branches in a single search. We further present a series of models via the Hybrid Convolutional-Transformer Architecture Search (HyCTAS) method that searches for the best hybrid combination of light-weight convolution layers and memory-efficient self-attention layers between branches from different resolutions and fuses them to high resolution for both efficiency and effectiveness. Extensive experiments demonstrate that HyCTAS outperforms previous methods in both semantic segmentation and panoptic segmentation tasks. Code and models are available at https://github.com/MarvinYu1995/HyCTAS.
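The abstract describes a supernet in which slots on each resolution branch choose between a light-weight convolution and a memory-efficient self-attention layer, searched in a single shot. A hypothetical encoding of that search space is sketched below; the branch and slot counts are illustrative, not taken from the paper:

```python
import random

# Candidate operations per slot, as named in the abstract.
OPS = ("light_conv", "mem_eff_attn")


def sample_subnet(num_branches=3, slots_per_branch=4, rng=None):
    """Sample one sub-network path from the supernet, as single-shot
    NAS does during supernet training: each slot independently picks
    one candidate operation."""
    rng = rng or random.Random(0)
    return [[rng.choice(OPS) for _ in range(slots_per_branch)]
            for _ in range(num_branches)]


def search_space_size(num_branches=3, slots_per_branch=4):
    # A binary choice per slot gives 2 ** (branches * slots) candidates.
    return len(OPS) ** (num_branches * slots_per_branch)


print(search_space_size())  # 4096
```

Even this toy configuration yields thousands of candidate hybrids, which is why weight-sharing single-shot search is used instead of training each architecture from scratch.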
Problem

Research questions and friction points this paper is trying to address.

Efficiently integrating self-attention into high-resolution CNNs
Automating neural architecture design for image segmentation
Balancing latency and accuracy in segmentation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid Convolutional-Transformer Architecture Search (HyCTAS)
Multi-target multi-branch supernet method
Optimized search for light-weight and memory-efficient layers
๐Ÿ”Ž Similar Papers
No similar papers found.
Hongyuan Yu
Multimedia Department, Xiaomi Inc., Beijing 100085, China; University of Chinese Academy of Sciences, Beijing 101408, China
Cheng Wan
Georgia Institute of Technology
Mengchen Liu
Meta
Visual Analytics, Machine Learning
Dongdong Chen
Microsoft
Bin Xiao
Meta GenAI
Computer Vision, Vision and Language, Machine Learning, Human Pose Estimation
Xiyang Dai
Microsoft
Computer Vision, Deep Learning