Multiple Queries with Multiple Keys: A Precise Prompt Matching Paradigm for Prompt-based Continual Learning

📅 2025-01-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address knowledge skew and catastrophic forgetting caused by inaccurate prompt selection in prompt-based continual learning, this paper proposes a Multi-Query Multi-Key (MQMK) matching paradigm. The method introduces task-aware query vectors and a distribution-aware key encoder that jointly models class-prototype means and feature covariances, which the authors present as the first effort to explicitly use alignment with training data distributions as the core criterion for prompt selection. By combining broad-scale retrieval via multiple queries with fine-grained matching via multiple keys, MQMK achieves high-precision alignment between test samples and historical task distributions. Evaluated on three major continual learning benchmarks, MQMK establishes new state-of-the-art performance; under challenging scenarios, it improves prompt matching accuracy by over 30%. The implementation will be publicly released.

📝 Abstract
Continual learning requires machine learning models to continuously acquire new knowledge in dynamic environments while avoiding the forgetting of previous knowledge. Prompt-based continual learning methods effectively address the issue of catastrophic forgetting through prompt expansion and selection. However, existing approaches often suffer from low accuracy in prompt selection, which can result in the model receiving biased knowledge and making biased predictions. To address this issue, we propose the Multiple Queries with Multiple Keys (MQMK) prompt matching paradigm for precise prompt selection. The goal of MQMK is to select the prompts whose training data distribution most closely matches that of the test sample. Specifically, Multiple Queries enable precise breadth search by introducing task-specific knowledge, while Multiple Keys perform deep search by representing the feature distribution of training samples at a fine-grained level. Experiments show that MQMK enhances the prompt matching rate by over 30% in challenging scenarios and achieves state-of-the-art performance on three widely adopted continual learning benchmarks. Once this paper is accepted, we will release the code.
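The matching paradigm described in the abstract can be sketched in a few lines: each task contributes its own query transform (Multiple Queries) and a set of per-class key vectors (Multiple Keys), and the prompt whose keys best match the task-specific query of a test feature is selected. The sketch below is a minimal illustration under assumed shapes and randomly initialized parameters; the function and variable names (`select_prompt`, `queries`, `keys`) are hypothetical and not taken from the paper's released code.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    # cosine similarity with a small epsilon for numerical safety
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_prompt(feature, queries, keys):
    """Pick the task whose class keys best match its task-aware query.

    queries: list of (D, D) per-task projection matrices (Multiple Queries).
    keys:    list of (C_t, D) arrays of per-class key vectors (Multiple Keys).
    Returns the index of the best-matching task, i.e. which prompt to attach.
    """
    best_task, best_score = -1, -np.inf
    for t, (W, K) in enumerate(zip(queries, keys)):
        q = W @ feature                       # task-specific query vector
        score = max(cosine(q, k) for k in K)  # fine-grained match over class keys
        if score > best_score:
            best_task, best_score = t, score
    return best_task

# Toy usage: 3 tasks, 5 classes per task, 8-dimensional features.
D, T, C = 8, 3, 5
queries = [rng.standard_normal((D, D)) for _ in range(T)]
keys = [rng.standard_normal((C, D)) for _ in range(T)]
x = rng.standard_normal(D)
print(select_prompt(x, queries, keys))
```

In the actual method the queries carry task-specific knowledge learned during training and the keys summarize the feature distribution of each task's training samples, so a higher match score indicates the test sample most likely falls under that prompt's training distribution.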
Problem

Research questions and friction points this paper is trying to address.

Continual Learning
Learning Bias
Knowledge Retention
Innovation

Methods, ideas, or system contributions that make the work stand out.

MQMK method
Multiple Queries and Keys
Prompt Matching Accuracy
D
Dunwei Tu
National Key Laboratory for Novel Software Technology, Nanjing University, China; School of Artificial Intelligence, Nanjing University, China
H
Huiyu Yi
National Key Laboratory for Novel Software Technology, Nanjing University, China; School of Artificial Intelligence, Nanjing University, China
Y
Yuchi Wang
CUHK MMLab; Peking University
Multimodality · VLM · Generative Models
B
Baile Xu
National Key Laboratory for Novel Software Technology, Nanjing University, China; School of Artificial Intelligence, Nanjing University, China
J
Jian Zhao
School of Electronic Science and Engineering, Nanjing University, China
F
Furao Shen
Department of Computer Science & Technology, Nanjing University
Neural Networks · Robotic Intelligence