🤖 AI Summary
This work addresses severe catastrophic forgetting and low energy efficiency in continual learning by proposing a human-inspired dynamic task interleaving mechanism. Methodologically, it introduces a novel dual-weighted task scheduling strategy—driven jointly by learning progress and energy consumption—and integrates quantitative learning-progress assessment, energy-aware scheduling, multi-task gradient coordination optimization, and simulation-based robotic environment modeling to emulate natural human learning rhythms. Compared with conventional sequential or uniformly distributed multi-task training paradigms, the proposed mechanism achieves a +12.7% improvement in average accuracy on multi-task robotic continual learning benchmarks, while reducing training energy consumption by 31.4%. These results empirically validate its dual advantages in knowledge retention and energy efficiency, establishing a new paradigm for ecologically sustainable, human-like intelligent learning.
📝 Abstract
Humans can continuously acquire new skills and knowledge by exploiting existing ones for improved learning, without forgetting them. Similarly, 'continual learning' in machine learning aims to learn new information while preserving the previously acquired knowledge. Existing research often overlooks the nature of human learning, where tasks are interleaved due to human choice or environmental constraints. So, almost never do humans master one task before switching to the next. To investigate to what extent human-like learning can benefit the learner, we propose a method that interleaves tasks based on their 'learning progress' and energy consumption. From a machine learning perspective, our approach can be seen as a multi-task learning system that balances learning performance with energy constraints while mimicking ecologically realistic human task learning. To assess the validity of our approach, we consider a robot learning setting in simulation, where the robot learns the effect of its actions in different contexts. The conducted experiments show that our proposed method achieves better performance than sequential task learning and reduces energy consumption for learning the tasks.