Complexity Experts are Task-Discriminative Learners for Any Image Restoration

📅 2024-11-27
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional Mixture-of-Experts (MoE) architectures exhibit inconsistent expert behavior in image restoration, leading to poor task-expert alignment and preventing redundant experts from being skipped, which hinders inference efficiency. Method: We propose MoCE-IR, a Complexity-aware Mixture-of-Experts for Image Restoration, which replaces fixed-structure MoE with "complexity experts" featuring adaptive computational complexity and receptive fields, and introduces a lightweight, unsupervised, task-discriminative gating mechanism that enables automatic, complexity-driven expert selection. Contribution/Results: MoCE-IR supports end-to-end joint training for multi-degradation image restoration. Experiments demonstrate an average 42% reduction in redundant expert activations during inference, while achieving state-of-the-art PSNR and SSIM scores. The code and pretrained models are publicly released.

📝 Abstract
Recent advancements in all-in-one image restoration models have revolutionized the ability to address diverse degradations through a unified framework. However, parameters tied to specific tasks often remain inactive for other tasks, making mixture-of-experts (MoE) architectures a natural extension. Despite this, MoEs often show inconsistent behavior, with some experts unexpectedly generalizing across tasks while others struggle within their intended scope. This hinders leveraging MoEs' computational benefits by bypassing irrelevant experts during inference. We attribute this undesired behavior to the uniform and rigid architecture of traditional MoEs. To address this, we introduce "complexity experts" -- flexible expert blocks with varying computational complexity and receptive fields. A key challenge is assigning tasks to each expert, as degradation complexity is unknown in advance. Thus, we execute tasks with a simple bias toward lower complexity. To our surprise, this preference effectively drives task-specific allocation, assigning tasks to experts with the appropriate complexity. Extensive experiments validate our approach, demonstrating the ability to bypass irrelevant experts during inference while maintaining superior performance. The proposed MoCE-IR model outperforms state-of-the-art methods, affirming its efficiency and practical applicability. The source code and models are publicly available at https://eduardzamfir.github.io/MoCE-IR/
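The routing idea in the abstract -- experts of different computational cost plus a gating bias toward cheaper experts -- can be illustrated with a minimal sketch. This is not the paper's implementation; the expert structure, capacity values, and `bias_strength` parameter below are hypothetical stand-ins chosen only to show how a low-complexity bias shifts the gate's preference.

```python
import numpy as np

rng = np.random.default_rng(0)

class ComplexityExpert:
    """Hypothetical expert block; `capacity` is a proxy for its compute cost."""
    def __init__(self, dim, capacity):
        self.capacity = capacity
        self.w = rng.standard_normal((dim, dim)) * 0.02
    def __call__(self, x):
        # Residual transform standing in for a real restoration block.
        return x + x @ self.w

def biased_gate(logits, capacities, bias_strength=0.1):
    """Softmax gating with a bias toward lower-complexity experts.

    Subtracting bias_strength * capacity from each logit makes cheap
    experts preferred unless the router strongly favors a costly one.
    """
    biased = logits - bias_strength * np.asarray(capacities, float)
    e = np.exp(biased - biased.max())   # stable softmax
    return e / e.sum()

dim = 8
experts = [ComplexityExpert(dim, c) for c in (1, 2, 4, 8)]
router_w = rng.standard_normal((dim, len(experts))) * 0.1

x = rng.standard_normal(dim)
probs = biased_gate(x @ router_w, [e.capacity for e in experts])
k = int(np.argmax(probs))   # top-1 routing: all other experts are skipped
y = experts[k](x)
```

With top-1 routing, only the selected expert runs at inference, which is the mechanism that lets irrelevant (and overly costly) experts be bypassed; the complexity bias simply tilts that selection toward the cheapest expert that the router still deems adequate.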
Problem

Research questions and friction points this paper is trying to address.

Address inconsistent behavior in mixture-of-experts (MoE) architectures.
Assign tasks to experts with appropriate computational complexity.
Improve efficiency and performance in all-in-one image restoration models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces flexible complexity experts with varying computational demands.
Uses task-specific allocation based on degradation complexity.
Achieves superior performance by bypassing irrelevant experts.