🤖 AI Summary
This work addresses the limitations of conventional prompt-based continual learning methods, which rely on key-value pair structures and suffer from inter-task interference and limited scalability. To overcome these issues, the authors propose a Prompt-and-Prototype (ProP) mechanism that dispenses with key-value pairs entirely: task-specific prompts guide feature learning, while input representations are encoded as prototypes, and at inference predictions are made by dynamically binding prompts to their corresponding prototypes. The method also incorporates prompt-initialization regularization to stabilize training. Experiments on multiple standard continual learning benchmarks show that ProP substantially mitigates task interference, improves scalability, and consistently outperforms existing approaches.
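The prompt-to-prototype binding at inference can be pictured as a nearest-prototype lookup. The following is a minimal sketch, not the paper's implementation: all names (`prompts`, `prototypes`, `select_prompt`, the cosine-similarity matching rule, and the dimensions) are illustrative assumptions about how such a binding step could work.

```python
import numpy as np

# Hypothetical sketch of prototype-based prompt selection; names and the
# cosine-matching rule are assumptions, not taken from the paper.
rng = np.random.default_rng(0)
feature_dim, prompt_len, num_tasks = 8, 4, 3

# After training, suppose each task holds a learned prompt and a prototype
# (e.g., the mean feature vector of that task's training inputs).
prompts = {t: rng.normal(size=(prompt_len, feature_dim)) for t in range(num_tasks)}
prototypes = {t: rng.normal(size=feature_dim) for t in range(num_tasks)}

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_prompt(query_feature):
    """Bind the query to the prompt whose prototype it matches best."""
    best_task = max(prototypes, key=lambda t: cosine(query_feature, prototypes[t]))
    return best_task, prompts[best_task]

# A query feature close to task 1's prototype should retrieve task 1's prompt.
query = prototypes[1] + 0.01 * rng.normal(size=feature_dim)
task_id, prompt = select_prompt(query)
print(task_id)  # -> 1
```

Because no key-value store is queried, the lookup cost scales only with the number of stored prototypes, which is one way the described design could avoid the scalability issues of key-value prompt pools.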