Forward-Cooperation-Backward (FCB) learning in a Multi-Encoding Uni-Decoding neural network architecture

📅 2025-02-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the low learning efficiency and poor generalization of deep neural networks in novel concept acquisition. We propose a human-inspired three-stage learning paradigm—Forward exploration, Collaborative peer interaction, and Backward refinement (FCB). Methodologically, we design a multi-encoder–single-decoder architecture incorporating lateral synaptic connections to model inter-neuronal collaboration, coupled with a granularity-preserving dimensionality reduction strategy. Our key contribution is the first formalization of human cognitive learning mechanisms into a computationally tractable framework, overcoming the unidirectional information-flow limitations of conventional backpropagation (BP) and feedforward (FF) paradigms. Experiments on four benchmark datasets demonstrate that our approach significantly enhances semantic granularity preservation in low-dimensional embeddings, achieves higher downstream classification accuracy than BP and FF baselines, and accelerates convergence by 23%–37%.
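The multi-encoder, single-decoder layout with lateral synaptic mixing described above can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the authors' implementation: the dimensions, the uniform lateral weight matrix, the `tanh` activation, and the averaging into a single latent code are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper's actual dimensions are not given here.
D_IN, D_LATENT, N_ENCODERS = 16, 4, 3

# Multiple encoders (one linear map each) feeding a single shared decoder.
encoders = [rng.standard_normal((D_LATENT, D_IN)) * 0.1 for _ in range(N_ENCODERS)]
decoder = rng.standard_normal((D_IN, D_LATENT)) * 0.1

# Lateral synaptic connection: a row-stochastic matrix that mixes
# peer embeddings across encoders ("cooperation"). Uniform here.
lateral = np.full((N_ENCODERS, N_ENCODERS), 1.0 / N_ENCODERS)

def forward(x):
    # Each encoder forms its own embedding of the input.
    z = np.stack([np.tanh(W @ x) for W in encoders])   # (N_ENCODERS, D_LATENT)
    # Lateral mixing lets every encoder see its peers' embeddings.
    z_coop = lateral @ z                               # (N_ENCODERS, D_LATENT)
    # The single decoder reconstructs from a pooled latent code.
    z_final = z_coop.mean(axis=0)                      # (D_LATENT,)
    return decoder @ z_final, z_final

x = rng.standard_normal(D_IN)
x_hat, z = forward(x)
print(x_hat.shape, z.shape)  # (16,) (4,)
```

The pooled latent `z_final` is what would be evaluated as the low-dimensional embedding in the dimension-reduction experiments.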

📝 Abstract
The most popular technique for training a neural network is backpropagation. Recently, the Forward-Forward technique has also been introduced for certain learning tasks. In real life, however, human learning follows none of these techniques exclusively; it is essentially a combination of forward learning, backward propagation, and cooperation. Humans begin learning a new concept on their own and refine their understanding hierarchically, encountering doubts along the way. The most common way to resolve doubts is discussion with peers, which can be called cooperation. Such cooperation, that is, discussion and knowledge sharing among peers, is one of the most important steps in human learning. Even after discussion a few doubts may remain; the difference between one's understanding of the concept and the original literature is then identified and minimized over several revisions. Inspired by this, the paper introduces Forward-Cooperation-Backward (FCB) learning in a deep neural network framework, mimicking the human way of learning a new concept. A novel deep neural network architecture, called the Multi-Encoding Uni-Decoding neural network model, is designed to learn using the notion of FCB, and a special lateral synaptic connection is introduced to realize cooperation. The models are evaluated on dimension reduction over four popular datasets, where the ability to preserve the granular properties of the data in low-rank embeddings is tested to assess the quality of the reduction. Classification is also performed for downstream analysis, and an experimental convergence study establishes the efficacy of the FCB learning strategy.
Problem

Research questions and friction points this paper is trying to address.

How can the human pattern of forward exploration, peer cooperation, and backward refinement be formalized in a neural network, given the unidirectional information flow of backpropagation and Forward-Forward?
Can a Multi-Encoding Uni-Decoding architecture with lateral synaptic connections realize such cooperation?
Does FCB learning improve dimension reduction quality, downstream classification, and convergence over BP and FF baselines?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Forward-Cooperation-Backward learning
Multi-Encoding Uni-Decoding architecture
Lateral synaptic connection for cooperation
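The three FCB stages listed above can be composed into a single training step. The following is a toy sketch under stated assumptions: encoders are linear, cooperation is plain averaging of peer embeddings, and backward refinement is a gradient step on the shared decoder's reconstruction error. This stands in for, and is not, the authors' actual training algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
D_IN, D_LATENT, N_ENC = 8, 3, 2
lr = 0.05

encoders = [rng.standard_normal((D_LATENT, D_IN)) * 0.1 for _ in range(N_ENC)]
decoder = rng.standard_normal((D_IN, D_LATENT)) * 0.1
x = rng.standard_normal(D_IN)

losses = []
for step in range(50):
    # Forward exploration: each encoder forms its own embedding.
    zs = [W @ x for W in encoders]
    # Cooperation: peers share and average their embeddings.
    z = np.mean(zs, axis=0)
    # Backward refinement: reduce reconstruction error by gradient
    # descent on the shared decoder (a stand-in for full refinement).
    x_hat = decoder @ z
    err = x_hat - x
    losses.append(float(err @ err))
    decoder -= lr * np.outer(err, z)

print(f"reconstruction loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because the decoder update follows the negative gradient of a quadratic loss with a small step size, the reconstruction error shrinks monotonically over the revisions, loosely mirroring the "revise until understanding matches the original literature" analogy.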