BOLD: Boolean Logic Deep Learning

📅 2024-05-25
🏛️ Neural Information Processing Systems
📈 Citations: 1
Influential: 0
🤖 AI Summary
Deep learning training incurs high energy consumption, primarily due to floating-point arithmetic and frequent data movement. Method: This work introduces the first end-to-end training paradigm operating entirely in the Boolean domain—eliminating gradient descent and floating-point computation—and establishes a trainable deep network grounded in Boolean variational principles. It integrates Boolean logic optimization, discrete-domain training algorithms, and hardware-software co-design, underpinned by a unified energy-efficiency evaluation framework jointly considering architecture, memory hierarchy, and dataflow. Contribution/Results: The approach achieves full-precision baseline accuracy on ImageNet; surpasses state-of-the-art methods in semantic segmentation; delivers competitive performance in image super-resolution and Transformer-based language understanding; and significantly reduces both training and inference energy consumption. This work provides the first systematic, Boolean-native solution for energy-efficient AI training.
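The summary describes neurons built from Boolean weights and inputs, trained without floating-point arithmetic. The paper's exact formulation is not reproduced on this page, but a minimal sketch of such a neuron, assuming a majority-vote activation where agreement between an input bit and a weight bit plays the role of a +1 product (an XNOR-popcount style computation, which is an assumption of this illustration), might look like:

```python
# Hypothetical sketch of a Boolean neuron (not the paper's exact formulation):
# inputs and weights are Booleans standing in for {-1, +1}. Agreement between
# an input and its weight (both True or both False) is an XNOR; counting
# agreements replaces multiply-accumulate, and the output is the majority vote.

def bool_neuron(inputs, weights):
    """Output True iff a majority of input/weight pairs agree (XNOR-popcount)."""
    assert len(inputs) == len(weights)
    agreements = sum(1 for x, w in zip(inputs, weights) if x == w)
    # Sign of (agreements - disagreements): positive when 2*agreements >= n.
    return agreements * 2 >= len(inputs)

def bool_layer(inputs, weight_rows):
    """A layer is many such neurons sharing one Boolean input vector."""
    return [bool_neuron(inputs, row) for row in weight_rows]
```

Because every operation here is a comparison or an integer count, such a neuron maps to logic gates rather than floating-point units, which is the source of the energy savings the summary claims.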

📝 Abstract
Deep learning is computationally intensive, with significant efforts focused on reducing arithmetic complexity, particularly regarding energy consumption dominated by data movement. While existing literature emphasizes inference, training is considerably more resource-intensive. This paper proposes a novel mathematical principle by introducing the notion of Boolean variation such that neurons made of Boolean weights and inputs can be trained -- for the first time -- efficiently in the Boolean domain using Boolean logic instead of gradient descent and real arithmetic. We explore its convergence, conduct extensive experimental benchmarking, and provide a consistent complexity evaluation considering chip architecture, memory hierarchy, dataflow, and arithmetic precision. Our approach achieves baseline full-precision accuracy in ImageNet classification and surpasses state-of-the-art results in semantic segmentation, with notable performance in image super-resolution and in natural language understanding with transformer-based models. Moreover, it significantly reduces energy consumption during both training and inference.
Problem

Research questions and friction points this paper is trying to address.

Reducing energy consumption in deep learning via Boolean logic
Training Boolean-based neurons efficiently without gradient descent
Achieving high accuracy while minimizing computational resource use
Innovation

Methods, ideas, or system contributions that make the work stand out.

Boolean weights and inputs for neurons
Training in Boolean domain using Boolean logic
Reduces energy consumption in training and inference
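The innovation list highlights training in the Boolean domain without gradient descent. The paper's actual update rule based on Boolean variation is not detailed on this page; the following is a purely hypothetical toy illustration of how a Boolean weight could be trained with discrete feedback instead of a real-valued gradient, where the counter, threshold, and feedback signal are all assumptions of this sketch:

```python
# Toy illustration (hypothetical, NOT the paper's algorithm): a Boolean weight
# trained without gradients. Each weight keeps a small integer counter; a
# Boolean feedback signal indicates whether flipping this weight would reduce
# the error, and the weight flips only once disagreement accumulates past a
# threshold, giving a discrete analogue of a damped update step.

def update_weight(weight, flip_would_help, counter, threshold=3):
    """Return (new_weight, new_counter) after one Boolean feedback signal."""
    counter = counter + 1 if flip_would_help else max(counter - 1, 0)
    if counter >= threshold:
        return not weight, 0  # flip the Boolean weight, reset the counter
    return weight, counter
```

The point of the sketch is only that the entire training state (weight plus counter) is discrete, so no floating-point arithmetic is needed anywhere in the update.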
Van Minh Nguyen
Mathematical and Algorithmic Sciences Laboratory, Huawei Paris Research Center, France
Cristian Ocampo-Blandon
Mathematical and Algorithmic Sciences Laboratory, Huawei Paris Research Center, France
Aymen Askri
Mathematical and Algorithmic Sciences Laboratory, Huawei Paris Research Center, France
Louis Leconte
Mathematical and Algorithmic Sciences Laboratory, Huawei Paris Research Center, France
Ba-Hien Tran
Huawei Paris Research Center
Bayesian Inference, Machine Learning, Generative Models, Deep Learning, Efficient AI