Context is All You Need

📅 2026-04-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the generalization challenge posed by distribution shifts between training and test data. It proposes CONTXT, a lightweight test-time adaptation method that modulates intermediate neural representations through additive and multiplicative feature transforms. Designed for minimal computational overhead and broad applicability, CONTXT integrates into diverse architectures, including convolutional neural networks and large language models, without requiring retraining. Through its simple context-aware modulation mechanism, the approach improves robustness on unseen domains in both domain generalization (DG) and test-time adaptation (TTA) settings, showing consistent gains across discriminative and generative tasks.
📝 Abstract
Artificial Neural Networks (ANNs) are increasingly deployed across diverse real-world settings, where they must operate under data distributions that differ from those seen during training. This challenge is central to Domain Generalization (DG), which trains models to generalize to unseen domains without target data, and Test-Time Adaptation (TTA), which improves robustness by adapting to unlabeled test data at deployment. Existing approaches to address these challenges are often complex, resource-intensive, and difficult to scale. We introduce CONTXT (Contextual augmentatiOn for Neural feaTure X Transforms), a simple and intuitive method for contextual adaptation. CONTXT modulates internal representations using simple additive and multiplicative feature transforms. Within a TTA setting, it yields consistent gains across discriminative tasks (e.g., ANN/CNN classification) and generative models (e.g., LLMs). The method is lightweight, easy to integrate, and incurs minimal overhead, enabling robust performance under domain shift without added complexity. More broadly, CONTXT provides a compact way to steer information flow and neural processing without retraining.
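The "additive and multiplicative feature transforms" described in the abstract resemble a FiLM-style affine modulation of intermediate features. The sketch below illustrates that general idea only; the function name, parameter shapes, and values are hypothetical and not taken from the paper's implementation.

```python
def modulate_features(features, gamma, beta):
    """Sketch of elementwise affine feature modulation (hypothetical API):
    each feature channel f is scaled by a multiplicative term gamma
    and shifted by an additive term beta."""
    return [g * f + b for f, g, b in zip(features, gamma, beta)]

# Toy example: one sample with 4 feature channels.
h = [1.0, 2.0, 3.0, 4.0]
gamma = [1.0, 0.5, 2.0, 1.0]   # multiplicative (scaling) terms
beta = [0.0, 0.1, -1.0, 0.0]   # additive (shift) terms

out = modulate_features(h, gamma, beta)
# out == [1.0, 1.1, 5.0, 4.0]
```

Because the transform is a plain elementwise affine map, it adds only one multiply and one add per feature, which is consistent with the paper's claim of minimal computational overhead.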
Problem

Research questions and friction points this paper is trying to address.

Domain Generalization · Test-Time Adaptation · distribution shift · unseen domains · model robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Test-Time Adaptation · Domain Generalization · Feature Modulation · Contextual Adaptation · Lightweight Adaptation
Jean Erik Delanois
Department of Computer Science & Engineering, University of California, San Diego, La Jolla, California, USA
Shruti Joshi
Department of Medicine, University of California, San Diego, La Jolla, California, USA
Ryan Golden
Department of Medicine, University of California, San Diego, La Jolla, California, USA
Teresa Nick
Microsoft Corporation, Redmond, WA, USA
Maxim Bazhenov
Professor of Medicine
computational neuroscience, artificial intelligence, olfactory coding, epileptogenesis, sleep and memory consolidation