FoundIR-v2: Optimizing Pre-Training Data Mixtures for Image Restoration Foundation Model

πŸ“… 2025-12-09
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the limited generalization of universal image restoration models caused by imbalanced pre-training data mixture ratios. We propose a diffusion-based foundation model for multi-task image restoration. Methodologically, we introduce a novel data-balancing scheduling paradigm and a Mixture-of-Experts (MoE)-driven dynamic diffusion-prior allocation mechanism, which jointly model data-mixture statistics and task semantics to enable task-adaptive data composition and prior selection. We also incorporate a dynamic curriculum learning strategy to optimize training. The model uniformly supports over 50 real-world image restoration sub-tasks and achieves significant improvements over existing state-of-the-art methods in both cross-task consistency and overall performance, empirically validating the critical role of joint data–prior optimization in enhancing universal restoration capability.

πŸ“ Abstract
Recent years have witnessed significant advances in image restoration foundation models, driven by improvements in the scale and quality of pre-training data. In this work, we find that the mixture proportions of data from different restoration tasks are also a critical factor that directly determines the overall performance of all-in-one image restoration models. To this end, we propose a high-capacity diffusion-based image restoration foundation model, FoundIR-v2, which adopts a data equilibrium scheduling paradigm to dynamically optimize the proportions of mixed training datasets from different tasks. By leveraging the data mixing law, our method ensures a balanced dataset composition, enabling the model to achieve consistent generalization and comprehensive performance across diverse tasks. Furthermore, we introduce an effective Mixture-of-Experts (MoE)-driven scheduler into generative pre-training to flexibly allocate task-adaptive diffusion priors for each restoration task, accounting for the distinct degradation forms and levels exhibited by different tasks. Extensive experiments demonstrate that our method can address over 50 sub-tasks across a broad range of real-world scenarios and achieves favorable performance against state-of-the-art approaches.
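The paper does not spell out the equilibrium rule in this listing, but the core idea of data equilibrium scheduling can be sketched as re-weighting each task's sampling proportion toward tasks that are lagging. The function below is a hypothetical illustration (the names `equilibrium_proportions` and the loss-driven softmax rule are assumptions, not the paper's actual data mixing law):

```python
import math

def equilibrium_proportions(task_losses, temperature=1.0):
    """Hypothetical data-equilibrium step: tasks whose validation loss
    is still high receive a larger share of the next training epoch.

    task_losses: dict mapping task name -> current validation loss.
    Returns a dict of sampling proportions that sum to 1.
    """
    # Exponentiate losses so lagging tasks are upweighted; the
    # temperature controls how aggressively the mixture is rebalanced.
    weights = {t: math.exp(loss / temperature) for t, loss in task_losses.items()}
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}

# Example: deblurring lags behind denoising, so it is sampled more often.
props = equilibrium_proportions({"denoise": 0.10, "deblur": 0.30, "derain": 0.20})
```

A real scheduler would re-estimate these proportions periodically during pre-training rather than once; this sketch only shows the re-weighting step itself.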
Problem

Research questions and friction points this paper is trying to address.

Optimizes pre-training data mixtures for image restoration models
Balances dataset composition across diverse restoration tasks
Allocates task-adaptive diffusion priors for varied degradations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data equilibrium scheduling optimizes mixed training dataset proportions
Mixture-of-Experts scheduler allocates task-adaptive diffusion priors
Diffusion-based foundation model generalizes across diverse restoration tasks
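The bullets above describe an MoE scheduler that assigns task-adaptive diffusion priors. A minimal sketch of the gating idea, assuming a softmax over task–expert affinities that blends per-expert prior parameters (all names and the vector representation of a "prior" here are illustrative assumptions, not the paper's architecture):

```python
import math

def moe_prior_allocation(task_feat, expert_keys, expert_priors):
    """Hypothetical MoE-style gate: a softmax over task-expert affinity
    scores produces mixture weights, which blend each expert's
    diffusion-prior parameters (e.g. noise-schedule values)."""
    # Affinity of the task feature with each expert's key vector.
    scores = [sum(f * k for f, k in zip(task_feat, key)) for key in expert_keys]
    # Numerically stable softmax over expert scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    gates = [e / sum(exps) for e in exps]
    # Blend expert prior parameters by gate weight.
    blended = [sum(g * p[i] for g, p in zip(gates, expert_priors))
               for i in range(len(expert_priors[0]))]
    return blended, gates

# Example: a task aligned with expert 0 draws most of its prior from it.
prior, gates = moe_prior_allocation(
    task_feat=[1.0, 0.0],
    expert_keys=[[1.0, 0.0], [0.0, 1.0]],
    expert_priors=[[0.1, 0.9], [0.5, 0.5]],
)
```

In practice the gate would be learned end-to-end and could route hard (top-k experts) rather than blend softly; the soft blend above is just the simplest well-defined variant.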
πŸ”Ž Similar Papers
No similar papers found.