MoETTA: Test-Time Adaptation Under Mixed Distribution Shifts with MoE-LayerNorm

📅 2025-11-14
🤖 AI Summary
Real-world test data often suffer from multi-source, heterogeneous mixed distribution shifts, rendering existing test-time adaptation (TTA) methods ineffective due to their reliance on a single adaptation path; moreover, no realistic benchmark exists for rigorous evaluation.

Method: We propose a Mixture-of-Experts (MoE)-based TTA framework: (i) decoupled expert networks enable multi-directional gradient updates; (ii) an entropy-driven dynamic gating mechanism selects the most suitable expert per sample; and (iii) LayerNorm and entropy minimization enhance adaptation stability.

Contribution/Results: To address the evaluation gap, we introduce potpourri/potpourri+, the first benchmarks explicitly designed for mixed distribution shifts, and present the first systematic analysis of catastrophic forgetting in TTA. Extensive experiments across three mixed-shift settings demonstrate significant improvements over state-of-the-art methods, validating the effectiveness and robustness of multi-path adaptation in complex deployment environments.

📝 Abstract
Test-time adaptation (TTA) has proven effective in mitigating performance drops under single-domain distribution shifts by updating model parameters during inference. However, real-world deployments often involve mixed distribution shifts, where test samples are affected by diverse and potentially conflicting domain factors, posing significant challenges even for state-of-the-art (SOTA) TTA methods. A key limitation of existing approaches is their reliance on a unified adaptation path, which fails to account for the fact that optimal gradient directions can vary significantly across domains. Moreover, current benchmarks cover only synthetic or homogeneous shifts, failing to capture the complexity of real-world heterogeneous mixed distribution shifts. To address this, we propose MoETTA, a novel entropy-based TTA framework that integrates the Mixture-of-Experts (MoE) architecture. Rather than enforcing a single parameter-update rule for all test samples, MoETTA introduces a set of structurally decoupled experts, enabling adaptation along diverse gradient directions. This design allows the model to better accommodate heterogeneous shifts through flexible and disentangled parameter updates. To simulate realistic deployment conditions, we introduce two new benchmarks: potpourri and potpourri+. While classical settings focus solely on synthetic corruptions, potpourri encompasses a broader range of domain shifts, including natural, artistic, and adversarial distortions, capturing more realistic deployment challenges. Additionally, potpourri+ includes source-domain samples to evaluate robustness against catastrophic forgetting. Extensive experiments across three mixed distribution shift settings show that MoETTA consistently outperforms strong baselines, establishing SOTA performance and highlighting the benefit of modeling multiple adaptation directions via expert-level diversity.
Problem

Research questions and friction points this paper is trying to address.

Addresses test-time adaptation challenges under mixed distribution shifts
Overcomes limitations of unified adaptation paths with expert diversity
Introduces realistic benchmarks for heterogeneous real-world deployment scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture-of-Experts architecture enables diverse gradient directions
Structurally decoupled experts handle heterogeneous distribution shifts
Entropy-based framework adapts parameters via flexible expert selection
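The entropy-driven expert selection sketched in the bullets above can be illustrated with a minimal NumPy example. This is not the authors' implementation: the function names and tensor shapes are assumptions for illustration. The idea is that each decoupled expert produces its own logits for a batch, and the gate routes each sample to the expert whose prediction has the lowest entropy, i.e., the highest confidence.

```python
import numpy as np

def softmax(logits, axis=-1):
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy(probs, axis=-1):
    # Shannon entropy of a categorical distribution, in nats
    return -(probs * np.log(probs + 1e-12)).sum(axis=axis)

def select_experts(expert_logits):
    """Entropy-driven gating (illustrative, not the paper's code).

    expert_logits: array of shape (num_experts, batch, num_classes),
    one set of logits per expert. Returns, for each sample, the index
    of the expert with the lowest predictive entropy.
    """
    probs = softmax(expert_logits)   # per-expert class probabilities
    H = entropy(probs)               # shape (num_experts, batch)
    return H.argmin(axis=0)          # shape (batch,): chosen expert per sample

# toy demo with two experts, a batch of four samples, ten classes
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 4, 10))
idx = select_experts(logits)
print(idx.shape)  # one expert index per sample
```

In a full TTA loop, the selected expert's prediction would then drive an entropy-minimization update of that expert's parameters (e.g., its normalization-layer affines), so that different experts accumulate updates along different gradient directions.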
Authors

Xiao Fan
College of Computer Science and Technology, Tongji University, Shanghai, China

Jingyan Jiang
Shenzhen Technology University
Research interests: Test-time Adaptation, Embodied AI, Machine Learning Systems

Zhaoru Chen
College of Application Technology, Shenzhen University, Shenzhen, China

Fanding Huang
Tsinghua University
Research interests: Semantic Segmentation, Test-time Adaptation, Large Language Models

Xiao Chen
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China

Qinting Jiang
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China

Bowen Zhang
School of Artificial Intelligence, Shenzhen Technology University, Shenzhen, China

Xing Tang
School of Artificial Intelligence, Shenzhen Technology University, Shenzhen, China

Zhi Wang
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China