Dedicated Inference Engine and Binary-Weight Neural Networks for Lightweight Instance Segmentation

📅 2024-06-17
🏛️ 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Instance segmentation on edge devices faces challenges in balancing accuracy, model size, and hardware efficiency. Method: This paper proposes a co-optimization framework for binary-weight neural networks (BNNs), comprising (i) a dual-mode ASIC inference engine specialized for BNNs—supporting inference via bitwise operations and additions only—and (ii) a lightweight fused architecture integrating the SegNeXt backbone with the SparseInst decoder. Contribution/Results: We introduce the first dual-mode BNN inference architecture and achieve a hardware implementation requiring only 52% of the area of conventional MAC-based units. Evaluated on the "Person" category, our method surpasses YOLACT in accuracy while shrinking the model by a factor of 77.7 and cutting hardware resource usage by 48%. The design significantly improves energy efficiency and deployment feasibility for edge instance segmentation.

📝 Abstract
Binary-weight Neural Networks (BNNs), in which weights are binarized and activations are quantized, are employed to reduce computational costs of various kinds of applications. In this paper, a design methodology of hardware architecture for inference engines is proposed to handle modern BNNs with two operation modes. Multiply-Accumulate (MAC) operations can be simplified by replacing multiply operations with bitwise operations. The proposed method can effectively reduce the gate count of inference engines by removing a part of computational costs from the hardware system. The architecture of MAC operations can calculate the inference results of BNNs efficiently with only 52% of hardware costs compared with the related works. To show that the inference engine can handle practical applications, two lightweight networks which combine the backbones of SegNeXt and the decoder of SparseInst for instance segmentation are also proposed. The output results of the lightweight networks are computed using only bitwise operations and add operations. The proposed inference engine has lower hardware costs than related works. The experimental results show that the proposed inference engine can handle the proposed instance-segmentation networks and achieves higher accuracy than YOLACT on the "Person" category although the model size is 77.7× smaller compared with YOLACT.
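The multiply-free MAC idea in the abstract can be sketched as follows. With weights constrained to {+1, -1}, a dot product reduces to dot(w, x) = sum(x) - 2·sum(x_i where w_i = -1), so only additions, bit tests, and a shift are needed. This is a minimal illustrative sketch, not the paper's actual engine: the bit-packing convention (`packed_neg_mask`) and the function name are assumptions for illustration.

```python
# Sketch: binary-weight dot product using only bitwise ops and additions.
# Convention (assumed): bit i of packed_neg_mask set to 1 means w_i = -1,
# cleared means w_i = +1. Activations are quantized integers.

def binary_dot(packed_neg_mask: int, activations: list[int]) -> int:
    """Dot product of {+1, -1} weights with integer activations,
    computed as sum(x) - 2 * sum(x_i where w_i = -1)."""
    total = 0
    neg = 0
    for i, a in enumerate(activations):
        total += a
        if (packed_neg_mask >> i) & 1:  # bitwise test: is w_i == -1?
            neg += a
    return total - (neg << 1)  # 2*neg via left shift; no multiplier needed

# Example: w = [+1, -1, +1, -1], x = [3, 5, 2, 7]
# dot = 3 - 5 + 2 - 7 = -7
mask = 0b1010  # bits 1 and 3 set -> w_1 = w_3 = -1
print(binary_dot(mask, [3, 5, 2, 7]))  # -> -7
```

In hardware, the per-element branch becomes a mux selected by the weight bit, which is why such a unit can be substantially smaller than a full multiply-accumulate datapath.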
Problem

Research questions and friction points this paper is trying to address.

Binary-Weight Neural Networks
Object Segmentation
Embedded Systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Binary-Weight Neural Networks
Hardware Architecture
Efficient Inference
Tse-Wei Chen
Canon Inc.
Signal Processing · Image Processing · Pattern Recognition · VLSI Design
Wei Tao
Huazhong University of Science and Technology
Quantization · LLM · Time-Series
Dongyue Zhao
Canon Innovative Solution (Beijing) Co., Ltd., 12A Floor, Yingu Building, No.9 Beisihuanxi Road, Haidian, Beijing, China
Kazuhiro Mima
Device Technology Development Headquarters, Canon Inc., 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo 146-8501, Japan
Tadayuki Ito
Device Technology Development Headquarters, Canon Inc., 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo 146-8501, Japan
Kinya Osa
Device Technology Development Headquarters, Canon Inc., 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo 146-8501, Japan
Masami Kato
Device Technology Development Headquarters, Canon Inc., 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo 146-8501, Japan