Memory-Integrated Reconfigurable Adapters: A Unified Framework for Settings with Multiple Tasks

📅 2025-11-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of jointly achieving domain generalization and continual learning in multi-task settings, where catastrophic forgetting commonly occurs. We propose MIRA, a unified framework that integrates Hopfield-style associative memory into adapter architectures. Its core innovation lies in a reconfigurable shared backbone coupled with sample-level dynamic retrieval, enabled by post-training key learning and affine composition-based updates for task-adaptive modulation. This design emulates neuromodulatory regulation of a single neural circuit, facilitating rapid task switching and persistent knowledge retention. On standard benchmarks, MIRA achieves state-of-the-art out-of-distribution accuracy for domain generalization while significantly outperforming dedicated continual learning methods. Notably, it demonstrates superior knowledge retention across incremental tasks, effectively mitigating catastrophic forgetting without compromising generalization.

📝 Abstract
Organisms constantly pivot between tasks such as evading predators, foraging, traversing rugged terrain, and socializing, often within milliseconds. Remarkably, they preserve knowledge of once-learned environments without catastrophic forgetting, a phenomenon neuroscientists hypothesize is due to a single neural circuit dynamically overlaid by neuromodulatory agents such as dopamine and acetylcholine. In parallel, deep learning research addresses analogous challenges via domain generalization (DG) and continual learning (CL), yet these methods remain siloed, despite the brain's ability to perform them seamlessly. In particular, prior work has not explored architectures involving associative memories (AMs), which are an integral part of biological systems, to jointly address these tasks. We propose Memory-Integrated Reconfigurable Adapters (MIRA), a unified framework that integrates Hopfield-style associative memory modules atop a shared backbone. Associative memory keys are learned post hoc to index and retrieve an affine combination of stored adapter updates for any given task or domain on a per-sample basis. By varying only the task-specific objectives, we demonstrate that MIRA seamlessly accommodates domain shifts and sequential task exposures under one roof. Empirical evaluations on standard benchmarks confirm that our AM-augmented architecture significantly enhances adaptability and retention: in DG, MIRA achieves SoTA out-of-distribution accuracy, and in incremental learning settings, it outperforms architectures explicitly designed to handle catastrophic forgetting using generic CL algorithms. By unifying adapter-based modulation with biologically inspired associative memory, MIRA delivers rapid task switching and enduring knowledge retention in a single extensible architecture, charting a path toward more versatile and memory-augmented AI systems.
Problem

Research questions and friction points this paper is trying to address.

Unify domain generalization and continual learning in a single framework
Integrate associative memory to prevent catastrophic forgetting in AI
Enable rapid task switching and enduring knowledge retention in AI systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates Hopfield-style associative memory modules
Learns associative memory keys for per-sample retrieval
Unifies domain generalization and continual learning adapters
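The per-sample retrieval mechanism summarized above (learned keys scored against a sample's query, yielding an affine combination of stored adapter updates) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function name `retrieve_adapter_update`, the softmax-with-inverse-temperature `beta` scoring, and the dense-matrix adapter deltas are all illustrative assumptions standing in for the Hopfield-style read-out MIRA describes.

```python
import numpy as np

def retrieve_adapter_update(query, keys, adapter_updates, beta=8.0):
    """Hopfield-style read-out (illustrative): score a sample's query
    against learned memory keys, then return an affine combination
    (weights sum to 1) of the stored per-task adapter updates."""
    scores = keys @ query                          # (num_tasks,)
    weights = np.exp(beta * (scores - scores.max()))
    weights /= weights.sum()                       # softmax -> affine weights
    # Contract the task axis: weighted sum of adapter deltas.
    return np.tensordot(weights, adapter_updates, axes=1)

rng = np.random.default_rng(0)
d, num_tasks = 16, 4
keys = rng.standard_normal((num_tasks, d))         # post-hoc learned keys
updates = rng.standard_normal((num_tasks, d, d))   # stored adapter deltas
query = keys[2] + 0.05 * rng.standard_normal(d)    # sample near task 2's key

delta = retrieve_adapter_update(query, keys, updates)
print(delta.shape)  # one blended adapter update, shape (16, 16)
```

A higher `beta` sharpens the softmax toward hard task selection, while a lower one blends updates across tasks, which is one plausible way per-sample modulation could interpolate between seen domains.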
Susmit Agrawal
PhD Candidate at IMPRS-IS
NeuroAI · Deep Learning · Computer Vision
Krishn Vishwas Kher
IIT Hyderabad
Saksham Mittal
IIT Hyderabad
Swarnim Maheshwari
IIT Hyderabad
Vineeth N. Balasubramanian
IIT Hyderabad, Microsoft Research, India