Learning from Many and Adapting to the Unknown in Open-set Test Streams

📅 2026-04-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited generalization of large language models (LLMs) under continuously evolving tasks and non-stationary distribution shifts, where existing test-time adaptation methods struggle to balance source-knowledge retention with adaptation reliability. The authors propose Synapse Consolidation (SyCo), the first approach to incorporate the Rac1 and MAPK pathways of Drosophila memory updating into LLM test-time adaptation. SyCo employs a dual-pathway architecture built on low-rank adapters: the Rac1 pathway restricts updates to a tail gradient subspace to stabilize source knowledge, while the MAPK pathway improves adaptation-signal quality through hierarchical noise suppression. A structured objective function integrates problem understanding, process understanding, and source-domain safeguards. Evaluated across 18 NLP datasets under a multi-source open-set adaptation setting, SyCo achieves 78.31% and 85.37% accuracy on unseen tasks and data shifts, respectively, significantly outperforming strong baselines.
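The summary's "tail gradient subspace" idea can be read as projecting updates onto the least-dominant singular directions of a frozen source weight matrix, so adaptation avoids directions that carry most source knowledge. Below is a minimal NumPy sketch of that reading; the function names and the `tail_fraction` parameter are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def tail_subspace_projector(W, tail_fraction=0.25):
    """Orthogonal projector onto the span of the smallest left singular
    directions of W (an illustrative stand-in for the 'tail' subspace
    described in the summary; tail_fraction is a hypothetical knob)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    k = max(1, int(len(S) * tail_fraction))  # number of tail directions kept
    U_tail = U[:, len(S) - k:]               # columns for smallest singular values
    return U_tail @ U_tail.T                 # projector in output space

def project_gradient(grad, P):
    """Restrict a gradient update to the tail subspace."""
    return P @ grad

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))       # frozen source weight (toy)
grad = rng.normal(size=(8, 8))    # raw adapter gradient (toy)
P = tail_subspace_projector(W, tail_fraction=0.25)
g_proj = project_gradient(grad, P)

# Sanity check: the projected gradient has no component along the top
# singular directions of W (numerically zero).
U, _, _ = np.linalg.svd(W, full_matrices=False)
top = U[:, :2]
print(np.abs(top.T @ g_proj).max())  # prints a value near 0
```

Whether SyCo computes the subspace from weights, accumulated gradients, or something else is not specified in this card; the sketch only shows the projection mechanics.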
📝 Abstract
Large Language Models (LLMs) generalize across tasks via reusable representations and flexible reasoning, yet remain brittle in real deployment under evolving tasks and continual distribution shift. A common remedy is Test-Time Adaptation (TTA), but existing methods update models with hand-designed unsupervised objectives over the full parameter space and largely overlook both the preservation of shared source knowledge and the reliability of adaptation signals. Drawing on the molecular signaling cascades of memory updating in Drosophila, we propose Synapse Consolidation (SyCo), a parameter-efficient LLM adaptation method that updates low-rank adapters through Rac1 and MAPK pathways under the guidance of a structured TTA objective driven by problem understanding, process understanding, and a source-domain guardrail. Rac1 confines plasticity to a tail-gradient subspace that is less critical for source knowledge, enabling rapid specialization while preserving source representations. MAPK uses a tiered controller to suppress noisy updates and consolidate useful adaptations under non-stationary streams. To model real deployments with multiple sources and continually emerging tasks, we introduce the Multi-source Open-set Adaptation (MOA) setting, in which a model is trained on multiple labeled source tasks and then adapts on open, non-stationary unlabeled test streams that mix seen and unseen tasks with partial overlap in label and intent space. Across 18 NLP datasets in the MOA setting, SyCo consistently outperforms strong baselines, achieving 78.31% on unseen-task adaptation and 85.37% on unseen-data shifts.
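The abstract's "tiered controller to suppress noisy updates" suggests gating each test-time update by the reliability of its adaptation signal. A minimal sketch of one plausible scheme, using prediction entropy with three tiers, is shown below; the thresholds, tier weights, and the use of entropy itself are assumptions for illustration, not the paper's actual MAPK controller.

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a discrete prediction distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def update_weight(probs, low=0.5, high=1.0):
    """Multiplier on the adaptation step for one test sample.
    Three hypothetical tiers: consolidate, down-weight, suppress."""
    h = entropy(probs)
    if h < low:      # confident prediction: apply the full update
        return 1.0
    elif h < high:   # uncertain: apply a down-weighted update
        return 0.5
    else:            # near-uniform, likely noise: suppress the update
        return 0.0

print(update_weight([0.9, 0.05, 0.05]))   # confident
print(update_weight([0.6, 0.3, 0.1]))     # uncertain
print(update_weight([0.34, 0.33, 0.33]))  # near-uniform
```

In a real TTA loop the returned weight would scale the adapter gradient before it is applied; a zero weight skips the update entirely, which is one way to "suppress" noise under a non-stationary stream.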
Problem

Research questions and friction points this paper is trying to address.

Test-Time Adaptation
Open-set Adaptation
Continual Distribution Shift
Large Language Models
Multi-source Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Test-Time Adaptation
Parameter-Efficient Fine-Tuning
Open-Set Recognition
Continual Learning
Low-Rank Adapters