Beyond MACs: Hardware Efficient Architecture Design for Vision Backbones

📅 2026-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the significant discrepancy between traditional MACs-based efficiency metrics for vision backbones and actual inference latency on edge devices, a gap that hinders hardware-efficient design. By contrasting the theoretical MAC counts of common building blocks with their real-world execution times, the study identifies the key factors governing hardware efficiency. It proposes LowFormer, a novel backbone family featuring the lightweight Lowtention module as a replacement for multi-head self-attention. Through hardware-aware co-design of the macro- and micro-architecture alongside cross-platform deployment optimizations, LowFormer achieves higher ImageNet accuracy than comparable backbones while substantially outperforming state-of-the-art models in speed across diverse hardware platforms, including both edge and desktop GPUs, and demonstrates strong performance on downstream tasks such as object detection, semantic segmentation, image retrieval, and visual object tracking.
📝 Abstract
Vision backbone networks play a central role in modern computer vision. Enhancing their efficiency directly benefits a wide range of downstream applications. To measure efficiency, many publications rely on MACs (multiply-accumulate operations) as a predictor of execution time. In this paper, we experimentally demonstrate the shortcomings of this metric, especially in the context of edge devices. By contrasting the MAC counts and execution times of common architectural design elements, we identify key factors for efficient execution and provide insights for optimizing backbone design. Based on these insights, we present LowFormer, a novel vision backbone family. LowFormer features a streamlined macro and micro design that includes Lowtention, a lightweight alternative to multi-head self-attention. Lowtention not only proves more efficient but also enables superior results on ImageNet. Additionally, we present an edge GPU version of LowFormer, which further improves on the baseline's speed on both edge and desktop GPUs. We demonstrate LowFormer's wide applicability by evaluating it on smaller image classification datasets, as well as adapting it to several downstream tasks, such as object detection, semantic segmentation, image retrieval, and visual object tracking. LowFormer models consistently achieve remarkable speed-ups across various hardware platforms compared to recent state-of-the-art backbones. Code and models are available at https://github.com/altair199797/LowFormer/blob/main/Beyond_MACs.md.
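The MAC counts that the abstract contrasts with execution time follow from simple closed-form expressions. Below is a minimal sketch (our own helper, not code from the paper) that compares a standard 3×3 convolution against a depthwise-separable pair at the same resolution; the large MAC gap it reports is exactly the kind of theoretical saving that, per the paper, often fails to translate into real speed-ups on edge devices.

```python
def conv_macs(h, w, c_in, c_out, k, groups=1):
    """MACs for a conv layer: one multiply-accumulate per kernel tap,
    per input-channel slice (c_in // groups), per output element."""
    return h * w * c_out * (c_in // groups) * k * k

# Standard 3x3 convolution, 64 -> 64 channels at 56x56 resolution.
standard = conv_macs(56, 56, 64, 64, 3)

# Depthwise-separable replacement: 3x3 depthwise + 1x1 pointwise.
depthwise = conv_macs(56, 56, 64, 64, 3, groups=64)
pointwise = conv_macs(56, 56, 64, 64, 1)
separable = depthwise + pointwise

print(standard, separable, round(standard / separable, 1))
# ~7.9x fewer MACs on paper -- yet measured latency, which also depends
# on memory access and operator launch overhead, often shows a much
# smaller gap on edge hardware, which is the paper's core point.
```

The ratio is purely theoretical; the paper's argument is precisely that such counts ignore memory traffic and hardware utilization, so they should always be checked against measured latency on the target device.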
Problem

Research questions and friction points this paper is trying to address.

vision backbone, hardware efficiency, MACs, edge devices, execution time
Innovation

Methods, ideas, or system contributions that make the work stand out.

LowFormer, Lowtention, hardware-efficient architecture, beyond MACs, vision backbone