Deep Lookup Network

📅 2025-09-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Convolutional neural networks (CNNs) suffer from high computational complexity and energy consumption due to dense multiplications, hindering their deployment on resource-constrained mobile devices. To address this, we propose a differentiable lookup table (DLT) operation that replaces multiplication with efficient table lookups while preserving model accuracy. Our key contribution is the first end-to-end differentiable lookup table architecture, seamlessly integrated into CNN layers and jointly optimizable with network parameters via backpropagation. Extensive experiments demonstrate that DLT achieves state-of-the-art performance on image classification, single-image super-resolution, and point cloud classification tasks. It accelerates inference by 1.8–3.2×, reduces energy consumption by 47%–63%, and incurs negligible accuracy degradation (<0.5%). This work establishes a novel paradigm for lightweight CNN design, enabling highly efficient yet accurate deep learning inference on edge devices.

📝 Abstract
Convolutional neural networks are constructed from massive numbers of operations of different types and are highly computationally intensive. Among these operations, multiplication has higher computational complexity and usually requires more energy consumption and longer inference time than other operations, which hinders the deployment of convolutional neural networks on mobile devices. On many resource-limited edge devices, complicated operations can be calculated via lookup tables to reduce computational cost. Motivated by this, in this paper we introduce a generic and efficient lookup operation that can be used as a basic operation for constructing neural networks. Instead of calculating the multiplication of weights and activation values, simple yet efficient lookup operations are adopted to compute their responses. To enable end-to-end optimization of the lookup operation, we construct the lookup tables in a differentiable manner and propose several training strategies to promote their convergence. By replacing computationally expensive multiplication operations with our lookup operations, we develop lookup networks for image classification, image super-resolution, and point cloud classification. It is demonstrated that our lookup networks benefit from the lookup operations to achieve higher efficiency in terms of energy consumption and inference speed while maintaining performance competitive with vanilla convolutional networks. Extensive experiments show that our lookup networks produce state-of-the-art performance on different tasks (both classification and regression) and different data types (both images and point clouds).
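The core idea of the abstract, replacing weight-activation multiplication with a table read, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the number of levels, the value range, and the class name `LookupMultiply` are all assumptions.

```python
import numpy as np

class LookupMultiply:
    """Hypothetical sketch: quantize weights and activations to a small
    set of levels, then read their product from a precomputed table
    instead of multiplying at inference time."""

    def __init__(self, num_levels=16, lo=-1.0, hi=1.0):
        # Quantization grid (illustrative range and resolution).
        self.levels = np.linspace(lo, hi, num_levels)
        # Precompute all pairwise products once: table[i, j] = levels[i] * levels[j].
        self.table = np.outer(self.levels, self.levels)

    def _index(self, x):
        # Map each value to the index of its nearest quantization level.
        return np.abs(x[..., None] - self.levels).argmin(-1)

    def forward(self, w, a):
        # Replace w * a with a table read.
        return self.table[self._index(w), self._index(a)]
```

With enough levels the table read closely approximates the true product; the paper additionally makes the table itself learnable and trains it end to end, which this sketch omits.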
Problem

Research questions and friction points this paper is trying to address.

Reducing computational complexity of neural networks
Replacing multiplication operations with lookup tables
Enabling efficient deployment on mobile devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Replaces multiplications with lookup operations
Uses differentiable lookup tables for training
Achieves efficiency gains in energy consumption and inference speed
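The "differentiable lookup tables" bullet can be illustrated with a toy forward/backward pair. This is a hedged sketch of one plausible scheme, not the paper's exact training strategy: the table entries are treated as learnable parameters, so the loss gradient scatters directly into the selected cells, while the hard index selection itself receives no gradient (akin to a straight-through scheme).

```python
import numpy as np

def lookup_forward(table, wi, ai):
    # Read the response for each (weight-index, activation-index) pair.
    return table[wi, ai]

def lookup_backward(table_shape, wi, ai, grad_out):
    # d(out)/d(table[i, j]) = 1 wherever cell (i, j) was selected,
    # so the upstream gradient is simply scattered back into those cells
    # (np.add.at accumulates when the same cell is selected twice).
    grad_table = np.zeros(table_shape)
    np.add.at(grad_table, (wi, ai), grad_out)
    return grad_table
```

Because the gradient with respect to each selected entry is exactly the upstream gradient, the table can be updated by any standard optimizer alongside the rest of the network.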
👥 Authors
Yulan Guo
Professor, Sun Yat-sen University
3D Vision, Machine Learning, Robotics
Longguang Wang
NUDT
low-level vision, 3D vision, deep learning
Wendong Mao
Assistant Professor, Sun Yat-sen University
Artificial Intelligence, Deep Learning, VLSI, Hardware Design, Acceleration
Xiaoyu Dong
University of Tokyo, Tokyo 113-8654, Japan
Yingqian Wang
National University of Defense Technology
light field, image super-resolution
Li Liu
College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
Wei An
College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China